US Companies Increase Web3 Projects by 39% in 2023 – Coinbase Study 🚀

A survey of Fortune 500 executives revealed that 56% of their companies are working on Web3 projects. Key sectors driving demand include spot ETFs, real-world asset tokenization, and stablecoins. 📈

Despite growing corporate interest in crypto initiatives, the US share of crypto developers has declined over the past five years: only 26% of crypto developers are now based in the country. 🇺🇸

Spot Bitcoin ETFs now hold more than $63 billion in assets, and on-chain government securities are generating fresh interest in real-world asset tokenization. 💰

In finance: Bitcoin ETFs drive initiatives at Bank of America, Wells Fargo, and Morgan Stanley. Goldman Sachs, JPMorgan, and Citi are authorized participants in Bitcoin ETFs. Citi and Goldman conducted tokenization tests. 🏦

In tech: Google has added blockchain data lookups to Search and provides infrastructure support, acting as a validator for several new blockchains. Microsoft tested the Canton Network for asset tokenization. 💻

🏅 Exciting news about GenAI being put to work in the real world!


Peacock is revolutionizing how we watch the Olympics with personalised recaps voiced by the legendary Al Michaels, powered by AI! 🎤🤖 Get your customised highlights tailored to your preferences and experience the games like never before. This blend of cutting-edge technology and iconic commentary is set to enhance fan engagement and bring the Olympics closer to you. Don’t miss out on this innovative feature! 

🚨 $2 Trillion by 2030: Tokenised Assets Market – Insights from McKinsey Report 📈

Key takeaways from this report:

1. #Tokenization involves creating unique digital assets on a #blockchain (a minimal data-structure sketch follows this list).

2. Tokenization is already happening on a large scale – trillions of dollars are transacted on the blockchain monthly.

3. Larry Fink from #BlackRock envisions widespread #tokenization across all financial assets.

4. The market capitalization of tokenized assets could reach around $2 trillion by 2030.

5. The adoption of tokenization is expected to occur in multiple waves, starting with use cases that have proven ROI.

6. Broader adoption faces regulatory challenges and technical complexities.

7. The most popular asset classes likely to lead in tokenization are #cash and #deposits, #bonds and exchange-traded notes (#ETNs), #mutualfunds and #ETFs, as well as #loans and #securitization.

8. Tokenized assets can improve operational efficiency, reduce costs, and increase transaction speed.
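To make point 1 a bit more concrete, here is a minimal, purely hypothetical sketch of tokenization as a data structure: a registry that splits one real-world asset into fungible tokens and tracks who holds claims on it. The class, field names, and asset ID are illustrative only and are not tied to any particular chain or standard.

```python
from dataclasses import dataclass, field

@dataclass
class TokenizedAsset:
    """Toy on-chain record: one real-world asset divided into fungible tokens."""
    asset_id: str          # e.g. a bond identifier or a property deed number (placeholder here)
    total_supply: int      # number of tokens the asset is split into
    holders: dict = field(default_factory=dict)  # address -> token balance

    def issue(self, address: str, amount: int) -> None:
        """Mint tokens to an investor (simplified: no signatures or consensus checks)."""
        if amount > self.total_supply - sum(self.holders.values()):
            raise ValueError("cannot issue more tokens than the asset backs")
        self.holders[address] = self.holders.get(address, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        """Move a claim on the underlying asset between two addresses."""
        if self.holders.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.holders[sender] -= amount
        self.holders[receiver] = self.holders.get(receiver, 0) + amount

# A hypothetical Treasury bill divided into 1,000,000 tokens
tbill = TokenizedAsset(asset_id="T-BILL-EXAMPLE-001", total_supply=1_000_000)
tbill.issue("0xAlice", 500_000)
tbill.transfer("0xAlice", "0xBob", 100_000)
print(tbill.holders)  # {'0xAlice': 400000, '0xBob': 100000}
```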

AMIE, Google's AI doctor, is poised to significantly improve the quality of life for millions globally.

This innovative project has shown remarkable potential in various aspects of medical care. The development team ran extensive tests, evaluating AMIE across 32 categories, including diagnosis, empathy, quality of treatment plans, and decision-making efficiency. Impressively, AMIE outperformed human doctors in 28 of these categories and matched them in the remaining four.

The training approach for AMIE was particularly groundbreaking. Using a self-play method, three independent agents (a patient, a doctor, and a critic) conducted over 7 million simulated consultations; by comparison, a human doctor typically performs only a few tens of thousands of consultations over an entire career. This vast experience could let AMIE deliver high-quality medical care to the roughly 99% of the global population who cannot afford a personal doctor. Within a few years, AMIE is expected to surpass most general practitioners, radiologists, and pediatricians in performance. It works tirelessly, is essentially free to run, and has instant access to a vast body of medical literature, having been trained on millions of patient interactions.
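The report doesn't include code, but the self-play idea is easy to sketch. Below is a minimal, hypothetical Python loop with three agents (patient, doctor, critic) generating and scoring simulated consultations; the agent classes and scoring logic are stand-ins for illustration, not Google's actual implementation.

```python
import random

class Agent:
    """Stand-in for an LLM-backed role; in the real system each role is a language model."""
    def __init__(self, role: str):
        self.role = role

    def respond(self, context: list[str]) -> str:
        # Placeholder: a real agent would condition a language model on the dialogue so far.
        return f"{self.role}: turn {len(context) + 1}"

def simulated_consultation(patient: Agent, doctor: Agent, turns: int = 4) -> list[str]:
    """Roll out one synthetic patient-doctor dialogue."""
    dialogue: list[str] = []
    for _ in range(turns):
        dialogue.append(patient.respond(dialogue))
        dialogue.append(doctor.respond(dialogue))
    return dialogue

def critic_score(dialogue: list[str]) -> float:
    """Stand-in critic: the real critic would grade diagnosis quality, empathy, safety, etc."""
    return random.random()

patient, doctor = Agent("patient"), Agent("doctor")
training_set = []
for _ in range(1000):                      # the real system ran millions of these rollouts
    dialogue = simulated_consultation(patient, doctor)
    score = critic_score(dialogue)
    if score > 0.5:                        # keep highly rated dialogues for the next fine-tuning round
        training_set.append((dialogue, score))
print(f"kept {len(training_set)} dialogues for fine-tuning")
```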

However, the priority in medicine is “do no harm.” Since publishing their report in January, the team has focused on improving the product, enhancing safety, and preparing for the necessary FDA and other regulatory approvals. While widespread adoption won’t happen overnight, the technical feasibility of AMIE is already a reality. 🌍💡

🚀 Introducing Florence-2: A Breakthrough in Vision Foundation Models!

We’re thrilled to introduce Florence-2, a pioneering vision foundation model designed to excel in diverse computer vision and vision-language tasks. 🌟 Using a unified, prompt-based approach, Florence-2 handles everything from captioning to object detection with simple text instructions. Trained on FLD-5B, a dataset boasting 5.4 billion annotations across 126 million images, it sets new standards in zero-shot and fine-tuning capabilities.
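For readers who want to try the prompt-based interface, here is a rough sketch using the Hugging Face release of Florence-2 (model ID microsoft/Florence-2-large). The task tokens and post-processing call follow the published model card, but treat the details as assumptions to verify against the current documentation.

```python
# pip install transformers pillow torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"  # assumed Hugging Face model ID
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("street_scene.jpg")   # any local test image
prompt = "<OD>"                          # task token: object detection; use "<CAPTION>" for captioning

inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
# Convert the raw text output into boxes and labels for the chosen task
result = processor.post_process_generation(raw, task=prompt, image_size=(image.width, image.height))
print(result)
```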

Explore how Florence-2 is revolutionising the field! 🌐

Moshi, Kyutai’s real-time voice assistant!

🚀 In Case You Missed This Breathtaking News! 🚀

Introducing Moshi, Kyutai’s real-time voice assistant! Developed by our 8-member team in just 6 months, Moshi is set to revolutionize voice interaction.

🔍 Key Features:

Multimodal LM: Speech in, speech out.

Fast Processing: Achieves 160ms latency.

Helium 7B: Our powerful base text language model.

Mimi Codec: In-house VQ-VAE with 300x compression (a toy quantization sketch follows this feature list).

Expressive TTS: 70 emotions and styles supported.
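To give a feel for what a VQ-VAE codec like Mimi does under the hood, here is a toy vector-quantization step in Python: each audio frame's latent vector is replaced by the index of its nearest codebook entry, which is what makes the representation so compact. This is a generic illustration with made-up dimensions, not Kyutai's code.

```python
import numpy as np

rng = np.random.default_rng(0)

codebook = rng.normal(size=(1024, 8))      # 1024 learned code vectors, 8 dims each (toy sizes)
latents = rng.normal(size=(50, 8))         # 50 encoded audio frames from the encoder

# Quantize: replace each latent frame with the index of its nearest codebook vector
distances = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
codes = distances.argmin(axis=1)           # one integer (10 bits here) per frame

# The decoder later looks the vectors back up and reconstructs audio from them
reconstructed_latents = codebook[codes]

print(codes[:10])                          # the compressed stream is just these indices
```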

🔧 Training & Safety:

Fine-Tuned: 100K detailed transcripts.

Quick Adaptation: Fine-tunes with <30 mins of audio.

On-Device: Runs on laptops/consumer GPUs, no internet needed.

🌐 This breakthrough will transform human-machine interaction, aid disabilities, assist in research, and more! Experience the future of voice assistants now!


Insights from Mira Murati (OpenAI CTO) – the evolving intelligence of GPT models. 🌟

We are excited to share a snippet from Mira Murati’s recent interview, where she discussed the evolving intelligence of GPT models. Mira likened GPT-3 to young children, GPT-4 to high school students, and projected that within the next year and a half, we might see models reaching PhD-level intelligence for specific tasks. 🎓🤖

📹 Watch the interview here: YouTube Link

What caught my attention was the similarity to a thesis from Situational Awareness: The Decade Ahead by Leopold Aschenbrenner. His predictions, based on training computations, suggested:

• GPT-2 was at a preschool level 🧸

• GPT-3 at an elementary school level 📚

• GPT-4 at an intelligent high school level 🧑‍🎓

• PhD-level models are on the horizon 🎓🔭

This similarity likely isn’t coincidental. I see three possibilities:

1. This might be a common internal framework at OpenAI.

2. Mira developed this perspective independently.

3. Mira was influenced by Leopold’s work.

I believe it’s almost certainly the first, as the timelines align closely with those of OpenAI’s in-house philosopher and predictor, Daniel Kokotajlo. His role involved assessing technological development timelines and planning integration measures. He predicted AGI by 2027, the same year OpenAI aimed to complete the now-defunct Superalignment project, preparing for superintelligence. 🧠✨

Regardless of your stance on these predictions, it’s intriguing to consider that this could be a reflection of OpenAI’s internal vision and forecasts, guiding their discussions and strategic planning. They envision achieving AGI (defined as expert-level performance in most economically significant tasks) in 3-4 years. That doesn’t mean GPT-X will immediately replace humans in most jobs: regulations, implementation challenges, and potential human resistance all stand in the way. Such a system could be developed but not announced, or announced but held back until regulations are in place.

“Attention Is All You Need” – the landmark paper that revolutionised NLP and ML


The landmark paper “Attention Is All You Need” by Vaswani et al. (2017) revolutionised the field of natural language processing (NLP) and machine learning by introducing the Transformer model. Unlike previous models that relied heavily on recurrent neural networks (RNNs) and convolutional neural networks (CNNs), the Transformer employs a novel mechanism known as “attention” to process sequential data.

At the core of the Transformer is the self-attention mechanism, which allows the model to weigh the importance of different words in a sentence, regardless of their position. This innovation addresses the limitations of RNNs, which struggle with long-range dependencies and sequential processing. The self-attention mechanism enables the model to capture complex relationships and dependencies in data, significantly improving performance and efficiency.
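As a concrete illustration of the self-attention computation described above, here is a minimal NumPy sketch of scaled dot-product attention over a single short sequence; the shapes and values are toy examples, not a production implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)       # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # project inputs to queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # how strongly each word attends to every other word
    weights = softmax(scores, axis=-1)            # each row is a probability distribution over positions
    return weights @ V, weights                   # weighted mix of values, plus the attention map

rng = np.random.default_rng(42)
seq_len, d_model, d_k = 5, 16, 8                  # a 5-word "sentence" with 16-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))

output, attn = self_attention(X, Wq, Wk, Wv)
print(output.shape)   # (5, 8): one context-aware vector per word
print(attn.round(2))  # (5, 5): attention weights, independent of word distance
```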

Key advantages of the Transformer model include:

1. Parallel Processing: Unlike RNNs, which process data sequentially, the Transformer can handle entire sequences in parallel, drastically reducing training times and making it more scalable for large datasets.

2. Enhanced Performance: The ability to focus on relevant parts of the input data leads to better understanding and generation of language, resulting in state-of-the-art performance on various NLP tasks such as translation, summarization, and text generation.

3. Flexibility: The Transformer architecture is highly adaptable and has been successfully applied to various domains beyond NLP, including computer vision and reinforcement learning.

The impact of this model is profound, as it has set new benchmarks in multiple applications and inspired the development of advanced models like BERT, GPT, and T5. For business managers, understanding the Transformer model is crucial as it underpins many AI-driven innovations that can enhance customer experiences, streamline operations, and provide deeper insights from data. Embracing these technologies can offer a competitive edge in today’s data-driven market.

T5, or Text-To-Text Transfer Transformer, is a model introduced by Google Research in 2019. It is designed to handle a wide variety of natural language processing (NLP) tasks using a unified text-to-text framework. This means that any NLP task is converted into a text generation problem. For instance, translation, summarization, question answering, and classification are all formatted as text input and text output pairs.
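As a quick illustration of the text-to-text framing, the sketch below runs two different tasks (translation and summarization) through the same public t5-small checkpoint via Hugging Face transformers, differing only in the text prefix; verify the task prefixes against the T5 documentation before relying on them.

```python
# pip install transformers sentencepiece torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def run(task_prefixed_text: str) -> str:
    """Every task is plain text in, plain text out: only the prefix changes."""
    input_ids = tokenizer(task_prefixed_text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=60)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(run("translate English to German: The house is wonderful."))
print(run("summarize: The Transformer architecture replaced recurrence with attention, "
          "allowing sequences to be processed in parallel and scaled to very large datasets."))
```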

The key features of T5 include:

1. Unified Framework: By converting all tasks into a text-to-text format, T5 simplifies the process of training and fine-tuning models on different tasks, leveraging transfer learning across them.

2. Pre-training on Massive Data: T5 is pre-trained on a diverse and large dataset called C4 (Colossal Clean Crawled Corpus), which helps it learn robust language representations.

3. Scalability: The model can be scaled to different sizes, from small models that can run on modest hardware to very large ones requiring substantial computational resources, allowing it to adapt to various needs and environments.

4. State-of-the-Art Performance: T5 has achieved state-of-the-art results on multiple benchmarks, demonstrating its versatility and effectiveness across different NLP tasks.

Overall, T5’s approach of treating all tasks as text generation problems has streamlined the development of NLP models and pushed the boundaries of what can be achieved with transfer learning in this domain.

🚀 Anthropic’s New Flagship LLM: Claude 3.5 Sonnet 🚀

Anthropic has launched Claude 3.5 Sonnet, the latest version of their flagship LLM and a formidable competitor to ChatGPT. 🧠 It outperforms GPT-4o on some benchmarks and offers better cost efficiency.

🔍 Key Improvements:

1️⃣ Enhanced chart and diagram recognition. From my testing, Claude 3.5 Sonnet often interprets infographics better than GPT-4o.

2️⃣ Lower cost: $3 per 1M input tokens versus GPT-4o’s $5. Note: this holds for English text; for Russian, differences in tokenization can make Claude the pricier option (see the rough cost sketch after this list).

3️⃣ Try it for free on claude.ai (phone verification required).
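The pricing note in point 2 depends on how many tokens each provider's tokenizer produces for the same text. The sketch below shows the back-of-the-envelope arithmetic; the tokens-per-character figures are illustrative assumptions, not measured values, so plug in counts from each provider's own tokenizer.

```python
# Rough input-cost comparison for the same document under two tokenizers.
# All tokens-per-character ratios below are assumptions for illustration only.
PRICE_PER_M_TOKENS = {"claude-3.5-sonnet": 3.00, "gpt-4o": 5.00}  # USD per 1M input tokens

def cost_usd(model: str, characters: int, tokens_per_char: float) -> float:
    tokens = characters * tokens_per_char
    return tokens / 1_000_000 * PRICE_PER_M_TOKENS[model]

doc_chars = 100_000  # a ~100k-character document

# English: assume both tokenizers are similarly efficient (~0.25 tokens per character)
print(cost_usd("claude-3.5-sonnet", doc_chars, 0.25))  # ~= $0.075
print(cost_usd("gpt-4o", doc_chars, 0.25))             # ~= $0.125

# Russian: assume Claude's tokenizer needs noticeably more tokens per character
print(cost_usd("claude-3.5-sonnet", doc_chars, 0.80))  # ~= $0.240
print(cost_usd("gpt-4o", doc_chars, 0.45))             # ~= $0.225  -> Claude ends up pricier
```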

💡 Our Take:

Claude delivers concise, clear answers, unlike ChatGPT’s verbosity. It’s also faster! For now, I recommend using both models to compare and verify responses, as GPT-4o is constantly improving. 🔄