Funding, Future, & Apple's AI Vision
Catch up on the megadeals, voice AI, and Apple's latest moves.

Editor's Note
The AI world didn’t catch its breath this week. From Meta quietly writing the biggest private check in AI history to Apple’s new announcements, the past seven days felt like a trailer for the next decade.
Expect funding fireworks, voice tech that blurs the human line, and a design leap that hints at glassy interactions. Let’s dive in.
Meta is reportedly negotiating a colossal $10 billion investment for data-labeling powerhouse Scale AI—easily the largest single private funding round in the sector.
The details:
Financial Times says Meta would take a minority stake but secure preferential access to Scale’s training data services.
The deal would value Scale at ~$20 billion despite a VC winter hitting most startups.
Scale’s annotation pipelines feed models at OpenAI, Anthropic and the U.S. Department of Defense.
Meta has warned investors that its own AI push will “meaningfully” raise capex this year.
Mark Zuckerberg is so frustrated with Meta’s standing in AI that he's willing to spend billions of dollars to convince Scale AI CEO Alexandr Wang to join his company, people familiar with the matter told CNBC.
Read the full report here: cnb.cx/4416NQd
— CNBC International (@CNBCi)
1:53 AM • Jun 11, 2025
Why it matters: Whoever controls the clean, well-labeled data controls the next wave of foundation models. If Meta locks up Scale’s capacity, rivals may scramble for alternative pipelines.
OpenAI rolled out an update that makes ChatGPT’s voice responses far more natural, adding expressive intonation and seamless code-switching between languages.
The details:
New neural TTS stack captures micro-pauses and vocal fry, reducing “robotic” artifacts.
Speaking rate adapts to listener feedback in real time.
Early testers report the model can switch from English to Spanish mid-sentence without latency.
The upgrade is live for Plus and Enterprise users, with an API coming later this month.
Haven’t tried the updated Advanced Voice that was recently launched to all paid users in ChatGPT? Then take a listen below.
Prompt: Wish me an awkward happy birthday.
— OpenAI (@OpenAI)
8:01 PM • Jun 9, 2025
Why it matters: Voice will be the default interface for AI assistants. More human-sounding bots lower friction for adoption—and raise fresh questions about disclosure and deep-fake abuse.
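For developers planning to try the voice API when it ships, here is a minimal sketch of a text-to-speech request against OpenAI's existing audio endpoint. The voice name and prompt are illustrative choices, not details from the announcement, and the new voice stack may expose different model names.

```python
import json
import os
import urllib.request

# Request body for OpenAI's /v1/audio/speech endpoint. The "tts-1" model
# and "alloy" voice are current, publicly documented options; the updated
# voice stack may add new identifiers.
body = {
    "model": "tts-1",
    "voice": "alloy",
    "input": "Wish me an awkward happy birthday.",
}

payload = json.dumps(body).encode()

api_key = os.environ.get("OPENAI_API_KEY")
if api_key:  # only hit the network when a key is configured
    req = urllib.request.Request(
        "https://api.openai.com/v1/audio/speech",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        with open("birthday.mp3", "wb") as out:
            out.write(resp.read())  # MP3 audio by default
```

The same request shape works from any HTTP client; the official `openai` Python package wraps it as `client.audio.speech.create(...)`.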
Apple's annual Worldwide Developers Conference (WWDC) showcased 13 significant advancements, including a revolutionary Liquid Glass design, enhanced multitasking capabilities for iPads, and substantial improvements in AI-powered translation.
The details:
The new Liquid Glass design language is expected to roll out across upcoming iPhone and Apple Watch software, bringing a translucent, layered aesthetic to the whole ecosystem.
iPadOS will feature a redesigned multitasking interface, allowing for more intuitive app switching and improved productivity.
AI-powered translation services across Apple's ecosystem are expected to see significant accuracy and speed improvements, particularly in real-time conversations.
Further details on developer tools and frameworks for these new features are anticipated to be released throughout the conference.
Expressive. Delightful. But still instantly familiar.
Introducing our new software design with Liquid Glass.
— Tim Cook (@tim_cook)
7:59 PM • Jun 9, 2025
Why it matters: Apple is betting on design polish and tightly integrated, on-device AI rather than a standalone chatbot. If Liquid Glass and the translation upgrades land well, they could reset the baseline that users expect from every platform's interface and built-in intelligence.
Anthropic quietly published new benchmarks showing its flagship Claude 4 Opus surpassing GPT-4o on legal reasoning tasks.
The details:
Scored 91% on the BARBRI bar exam simulator versus GPT-4o's 86%.
Uses a 350K-token context window, double the previous 4o limit.
Anthropic credits a new “constitutional down-scaling” safety method that trims hallucinations by 37 %.
Opus and the cheaper Sonnet tier are now generally available via Amazon Bedrock.

Why it matters: Enterprise buyers crave transparent safety rails as models move into regulated domains. Strong legal-reasoning scores bolster Anthropic’s pitch to law firms and compliance teams.
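Since Opus and Sonnet are now on Amazon Bedrock, teams can call them through the standard Bedrock runtime. A minimal sketch with `boto3` is below; the model ID and the legal prompt are placeholders (look up the exact Claude model ID for your region in the Bedrock console), while the request body follows Bedrock's documented Anthropic Messages format.

```python
import json
import os

# Placeholder -- substitute the exact Claude Opus model ID listed in the
# Bedrock model catalog for your AWS region.
MODEL_ID = "anthropic.claude-opus-4"  # assumption, verify before use

# Bedrock expects Anthropic's Messages request format in the body.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user",
         "content": "Summarize the holding in Marbury v. Madison."},
    ],
}

payload = json.dumps(body)

if os.environ.get("AWS_ACCESS_KEY_ID"):  # only call AWS with credentials set
    import boto3  # pip install boto3

    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=MODEL_ID, body=payload)
    result = json.loads(resp["body"].read())
    print(result["content"][0]["text"])
```

Bedrock also exposes a model-agnostic `converse` API if you want to swap between Claude tiers without changing the request format.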
Emotion AI startup Hume released EVI 3, claiming state-of-the-art recognition of 53 distinct affective states and dynamic vocal mirroring.
The details:
Trained on 70,000 hours of consented, labeled conversation data.
Beats GPT-4o by 18 F1 points on the EmpatheticDialogues benchmark.
SDK lets developers inject “empathetic mode” into any call-center stack with two API lines.
Early adopters include IKEA’s customer support and the mental-health app Wysa.
Meet EVI 3, another step toward general voice intelligence.
EVI 3 is a speech-language model that can understand and generate any human voice, not just a handful of speakers. With this broader voice intelligence comes greater expressiveness and a deeper understanding of tune,
— Hume (@hume_ai)
5:24 PM • May 29, 2025
Why it matters: As AI agents handle more sensitive tasks—from therapy triage to sales calls—emotional intelligence could become a key competitive moat.
Enjoyed this issue? Forward it to a friend and let us know what you’d like to see next.
~ 1Node AI