AI News
Latest artificial intelligence developments and insights

ElevenLabs and Google Lead in New Speech-to-Text Benchmark Rankings
In the latest release of Artificial Analysis' AA-WER speech-to-text benchmark version 2.0, ElevenLabs' Scribe v2 has emerged as the top performer with an impressively low word error rate of 2.3%. Following closely, Google's Gemini 3 Pro achieved a 2.9% error rate, while Mistral's Voxtral Small recorded a 3.0% error rate. Not too far behind are Google's Gemini 3 Flash at 3.1% and ElevenLabs' earlier model, Scribe v1, at 3.2%. An interesting aspect of Google's results is that Gemini was not specifically trained for transcription tasks, yet it excelled due to its versatile multimodal capabilities. Meanwhile, OpenAI's widely-used open-source tool, Whisper Large v3, sits in the middle of the pack with a 4.2% error rate. Lagging behind are Alibaba's Qwen3 ASR Flash at 5.9%, Amazon's Nova 2 Omni at 6.0%, and Rev AI at 6.1%. When it comes to the AA-AgentTalk test, which evaluates speech directed at voice assistants, ElevenLabs' Scribe v2 and Google's Gemini 3 Pro once again lead with error rates of 1.6% and 1.7%, respectively. AssemblyAI's Universal-3 Pro follows in third place with a 2.3% error rate. These results highlight the dominance of ElevenLabs and Google in both general speech-to-text and voice assistant-specific benchmarks.

OpenAI Shares Insights on Pentagon Agreement Amid Criticism
OpenAI's recent collaboration with the Department of Defense has sparked significant discussion, with CEO Sam Altman acknowledging the deal was "definitely rushed" and potentially problematic in appearance. This comes after Anthropic's negotiations with the Pentagon fell apart, leading to President Donald Trump instructing federal agencies to phase out Anthropic's technology over six months. Secretary of Defense Pete Hegseth labeled Anthropic as a supply-chain risk, setting the stage for OpenAI's swift announcement of its own agreement to deploy models in classified settings. Both Anthropic and OpenAI have expressed firm boundaries against the use of their technologies in autonomous weaponry and extensive domestic surveillance. However, questions have arisen over OpenAI's transparency regarding its safety measures and how it succeeded where Anthropic did not. In response, OpenAI published a detailed blog post outlining its position. The post highlighted that OpenAI's models would not be used for mass domestic surveillance, autonomous weapon systems, or high-stakes automated decisions like social credit systems. Unlike other AI companies that may have weakened their safety protocols, OpenAI claimed to maintain robust safeguards through a comprehensive, layered approach. The blog emphasized, "We retain full discretion over our safety stack, deploy via cloud, involve cleared OpenAI personnel, and uphold strong contractual protections," alongside existing legal protections in the U.S. Following the blog's publication, Techdirt's Mike Masnick critiqued the agreement, suggesting it could enable domestic surveillance by adhering to Executive Order 12333, which he argued allows the NSA to conduct surveillance by intercepting communications outside the U.S., even if they include information on U.S. persons. Katrina Mulligan, OpenAI's head of national security partnerships, addressed these concerns on LinkedIn, stating that the debate often overlooks the importance of deployment architecture over contract language. She explained that limiting deployment to cloud API ensures models cannot be directly integrated into weapons or operational hardware. On social media platform X, Altman further explained the rushed nature of the deal and acknowledged the backlash OpenAI faced, noting Anthropic's Claude surpassing OpenAI's ChatGPT in Apple's App Store rankings. Altman justified the agreement by expressing a desire to de-escalate tensions between the Department of Defense and the tech industry, hoping the deal would ultimately prove beneficial. "If we are right," Altman said, "this could lead to a de-escalation and show our commitment to helping the industry, despite the initial criticism."

Google Partners with Airtel to Combat RCS Spam in India with Network-Level Integration
In response to ongoing issues with spam affecting its Rich Communication Services (RCS) in India, Google is enhancing its partnership with local telecom operators to improve security measures. On Sunday, Bharti Airtel, India’s second-largest telecom provider with over 463 million subscribers, announced a collaboration with Google, aiming to integrate its network-level spam filtering into the RCS system within the country. This initiative seeks to enhance protection against unsolicited messages and fraud, according to statements from both companies. India, known for its massive mobile user base and rapid digital payments growth, presents significant challenges in controlling spam and fraudulent activities in messaging platforms. In 2022, the volume of complaints over unsolicited advertisements on Google's RCS, mainly via the Google Messages app, led the company to temporarily halt business promotions on the platform in India. Despite these efforts, some users still report spam issues, indicating the problem persists. Airtel has been cautious about aligning more closely with Google’s RCS until it could route message traffic through its own spam filtering systems, citing concerns over rising fraud risks. "We had not onboarded Google because we first wanted RCS messages to be routed through the Airtel spam filter," an Airtel spokesperson noted. The partnership will leverage Airtel’s network intelligence alongside Google’s RCS platform to conduct real-time checks on business messages, including verifying senders, detecting spam, and enforcing do-not-disturb preferences. Airtel described this collaboration as a “global first” in integrating a telecom operator’s spam filtering technology directly into an over-the-top messaging service. Google has expressed a commitment to working with other telecom operators globally to create a consistent and secure messaging experience for RCS users. Sameer Samat, president of the Android ecosystem at Google, emphasized the potential for this model to expand beyond India and standardize security across the RCS ecosystem. India plays a crucial role in Google's messaging endeavors, with over a billion internet users and more than 700 million smartphones in use. The country also boasts over 853 million WhatsApp users, highlighting the fierce competition in mobile messaging. Prabhu Ram, vice president for industry research at CyberMedia Research, commented that the integration with carriers represents a strategic move to address longstanding vulnerabilities in rich messaging platforms. The success of this partnership is expected to be measured by reductions in spam volume, user complaints, and incidents of fraud, as well as increased engagement with legitimate messages. Over the past year, Airtel reports that its AI-driven systems have blocked over 71 billion spam calls and 2.9 billion spam messages, contributing to a 69% reduction in fraud-related financial losses on its network. On a broader scale, Google has been promoting RCS as the successor to SMS, with the service handling more than a billion messages daily in the U.S. as of May 2025, according to a 28-day average. However, Google has not yet disclosed whether similar carrier integrations are planned for markets outside India or estimated the potential impact on reducing spam and fraud.

Massive AI Infrastructure Investments Fueling Industry Surge
The rapid development of artificial intelligence is not just about innovative software—it's heavily reliant on the massive infrastructure needed to support AI's computational demands. As tech giants race to integrate and expand AI capabilities, they simultaneously embark on an unprecedented push to construct the necessary infrastructure. Nvidia's CEO, Jensen Huang, recently projected that the AI infrastructure sector could see investments ranging from $3 trillion to $4 trillion by the decade's end, primarily fueled by AI enterprises. This intense demand is straining power grids and pushing construction limits across the industry. Here's a closer look at some of the most significant AI infrastructure projects, with substantial contributions from Meta, Oracle, Microsoft, Google, and OpenAI. Microsoft revolutionized the AI landscape in 2019 with a $1 billion investment in OpenAI, a non-profit known for its ties to Elon Musk. This partnership made Microsoft the exclusive cloud provider for OpenAI, driving a shift towards Azure cloud credits. Over time, Microsoft's investment grew to nearly $14 billion, setting the stage for substantial returns once OpenAI transitions to a for-profit model. However, the exclusivity of this partnership has waned; OpenAI now explores other cloud providers while maintaining Microsoft as a preferred choice. This strategic partnership has inspired others in the industry. For example, Anthropic secured $8 billion from Amazon, integrating unique modifications on Amazon's hardware to optimize AI training. Meanwhile, Google Cloud has formed alliances with smaller AI firms like Lovable and Windsurf, although these partnerships don't involve direct investments. OpenAI has also received a $100 billion boost from Nvidia, enhancing its GPU acquisition capabilities. Oracle has emerged as a formidable player in AI infrastructure. On June 30, 2025, Oracle disclosed a $30 billion cloud services agreement with OpenAI, a move that propelled its stock upwards. This was followed by a staggering $300 billion deal for compute power starting in 2027, briefly elevating Oracle's Larry Ellison to the position of the world's wealthiest person. This agreement anticipates significant growth and positions Oracle as a leading force in AI infrastructure. Nvidia, central to AI labs' infrastructure needs, has been reinvesting its cash influx creatively. In September 2025, Nvidia acquired a 4% stake in Intel for $5 billion and announced a $100 billion investment in OpenAI using GPUs for data center projects. Similar arrangements have been made with Elon Musk's xAI and AMD, keeping Nvidia's GPUs in high demand. Companies like Meta face complex challenges as they expand their legacy infrastructure. Meta plans to invest $600 billion in U.S. infrastructure by 2028, with significant portions allocated to new data centers. The Hyperion site in Louisiana, costing $10 billion, will generate 5 gigawatts of compute power, supported by a local nuclear plant. In Ohio, the Prometheus site will leverage natural gas, set to launch in 2026. Such expansions raise environmental concerns, as seen with xAI’s Tennessee data center, which reportedly violates the Clean Air Act. In a bold move, President Trump announced the 'Stargate' project, a $500 billion AI infrastructure initiative involving SoftBank, OpenAI, and Oracle. While hyped as the largest AI infrastructure endeavor, doubts about funding have tempered enthusiasm. Nevertheless, construction of eight data centers in Texas is underway, with completion expected by 2026. The surge in capital expenditures reflects tech companies' commitment to AI infrastructure. Amazon leads with a $200 billion projection for 2026, followed by Google's $175-185 billion estimate. Meta plans to invest $115-135 billion, though some projects remain off-books. Collectively, tech giants aim to allocate nearly $700 billion to data center projects in 2026 alone. Although investor caution grows, tech companies remain steadfast, viewing AI infrastructure as critical to their future. This article was originally published on September 22.

Anthropic’s Claude Climbs to Second Spot in App Store Amid Pentagon Negotiations Drama
Anthropic's AI chatbot, Claude, has surged in popularity on Apple's US App Store, reaching the number two spot among free apps. This rise follows intense media coverage of Anthropic's contentious negotiations with the Pentagon. According to CNBC, Claude now stands just behind OpenAI’s ChatGPT, which holds the top position, and ahead of Google Gemini, which sits in third place. Data from SensorTower reveals that Claude had been hovering just outside the top 100 at the end of January but has maintained a position within the top 20 throughout February. The app's ranking saw a significant boost this week, moving up from sixth on Wednesday, to fourth on Thursday, and eventually claiming the second spot on Saturday. The surge in Claude’s popularity coincides with Anthropic's efforts to ensure its AI models are not used by the Department of Defense for mass domestic surveillance or fully autonomous weapon systems. These negotiations led to a directive from President Donald Trump instructing federal agencies to cease using all Anthropic products. Additionally, Secretary of Defense Pete Hegseth labeled the company a supply-chain threat. Meanwhile, OpenAI announced its own deal with the Pentagon, with CEO Sam Altman stating that their agreement includes provisions concerning domestic surveillance and autonomous weaponry.

Anthropic's AI Dilemma: A Self-Made Crisis?
On Friday afternoon, as I began an interview, a news alert appeared on my screen: the Trump administration announced it was cutting off ties with Anthropic, a San Francisco-based AI company established by Dario Amodei in 2021. Defense Secretary Pete Hegseth quickly cited a national security law to blacklist Anthropic from Pentagon contracts, following Amodei's refusal to allow the use of Anthropic's AI for mass surveillance of U.S. citizens or for autonomous armed drones capable of selecting and eliminating targets without human intervention. This unexpected series of events could cost Anthropic a potential $200 million contract and lead to its exclusion from future defense collaborations. President Trump posted on Truth Social, instructing federal agencies to "immediately cease all use of Anthropic technology." In response, Anthropic has declared its intention to legally challenge the Pentagon's decision. For nearly a decade, Max Tegmark, an MIT physicist and founder of the Future of Life Institute, has cautioned that the rapid advancement of AI technology is surpassing global regulatory capabilities. In 2023, he helped organize an open letter, signed by over 33,000 individuals, including Elon Musk, calling for a pause in advanced AI development. Tegmark views Anthropic's current predicament as largely self-inflicted, stemming from an industry-wide resistance to regulation. Companies like Anthropic, OpenAI, Google DeepMind, and others have long claimed to self-regulate, but Anthropic recently abandoned a key safety pledge, which promised not to release increasingly powerful AI without confidence in its safety. In the absence of clear regulations, these companies find themselves vulnerable, Tegmark argues. During a recent interview, he expressed his thoughts on Anthropic's situation, noting how the initial excitement about AI's potential to advance healthcare and strengthen America has shifted to governmental disputes over its use for surveillance and autonomous weaponry. Anthropic has built its brand on being a safety-first AI company, despite collaborating with defense and intelligence sectors since at least 2024. Tegmark finds this contradictory, suggesting that while companies claim to prioritize safety, none have actively supported binding regulations akin to those in other industries. They have, in fact, backtracked on their promises: Google abandoned its "Don't be evil" motto, OpenAI removed "safety" from its mission statement, and Anthropic dropped its safety commitment. These companies have consistently lobbied against AI regulation, advocating for self-governance. Tegmark points out the irony that there is more regulation on sandwich shops than on AI systems, as companies resisted turning voluntary safety commitments into enforceable laws. This regulatory vacuum could lead to disastrous outcomes, comparable to historical corporate malpractices. The AI industry's common justification is the competitive race with China. However, Tegmark argues this is flawed, as China is taking measures to restrict AI developments that it perceives as harmful. He challenges the notion that AI superintelligence is an asset, suggesting it poses a national security threat if uncontrollable. The rapid pace of AI development has surprised many experts, with systems advancing faster than anticipated. Tegmark warns that this swift progression could soon impact job markets. As Anthropic faces its current challenges, the response from other AI giants remains uncertain. OpenAI's Sam Altman has publicly supported Anthropic's stance, while Google and xAI have yet to comment. Despite the current turmoil, Tegmark sees potential for a positive outcome if AI companies are subjected to the same regulations as other industries, requiring proof of safety before deploying powerful technologies. Such measures could usher in a new era of AI innovation free from existential concerns, although this is not the current trajectory.

Anthropic Challenges US Military's Supply Chain Risk Designation
In a surprising turn of events, the US Secretary of Defense, Pete Hegseth, has branded AI company Anthropic as a 'supply-chain risk,' a decision that has sent ripples through the tech industry. This designation, announced via social media, prohibits any military-associated contractors, suppliers, or partners from engaging in business with Anthropic. The move follows intense discussions between the Pentagon and the AI firm regarding the military's usage of Anthropic's AI technology. Anthropic has publicly opposed the Pentagon's stance, particularly its insistence on unrestricted military use of AI, including for mass surveillance and autonomous weaponry. In response, Anthropic has vowed to legally contest the supply-chain risk label, arguing that it sets a troubling precedent for US companies negotiating government contracts. The firm insists it has not received direct communication from the Department of Defense or the White House related to these negotiations. The supply-chain risk designation is a protective measure used by the Pentagon to exclude vendors posing security threats, which may include foreign influences. However, Anthropic claims that Secretary Hegseth lacks the legal authority to enforce such a sweeping prohibition on doing business with the military. The Pentagon has refrained from commenting on the issue. The announcement has sparked a backlash from Silicon Valley, with industry leaders criticizing the decision as impulsive and damaging. Paul Graham of Y Combinator and Boaz Barak from OpenAI have both expressed concerns about the implications of restricting a leading AI company like Anthropic. Meanwhile, OpenAI CEO Sam Altman revealed a separate agreement with the Department of Defense, emphasizing shared principles against domestic surveillance and autonomous weapon systems. Uncertainty looms for Anthropic's customers as experts in federal contracts suggest ambiguity over which clients might need to sever ties with the company. Legal and tech industry professionals are closely examining the situation, likening it to other regulatory measures such as Section 889 of the National Defense Authorization Act, which restricts contracts with companies using specific foreign tech components. This developing situation highlights ongoing tensions between tech companies and government regulations, potentially discouraging future collaborations with the Pentagon. Legal experts predict Anthropic will pursue litigation, which could drag on, disrupting their business operations and affecting partnerships with major players like Nvidia, Amazon, and Google. Until further clarification from the Department of Defense, companies remain in limbo, cautiously assessing their legal standing and next steps.

OpenAI Secures Pentagon AI Deal Following Anthropic's Federal Tech Ban
In a significant development, OpenAI has inked a deal with the United States Department of Defense to deploy its artificial intelligence models on classified networks, allowing all legal applications under the agreement. This partnership was finalized shortly after President Trump directed federal agencies to discontinue the use of technology from Anthropic, a competing AI firm. While Anthropic resisted allowing its technology to be used for mass surveillance and autonomous weapons, OpenAI's CEO, Sam Altman, agreed to permit all lawful uses. However, he ensured the inclusion of technical safeguards in the contract. According to Altman, the Department of Defense showed a "deep respect for safety" and expressed a desire to partner for optimal outcomes. The timing of OpenAI's agreement is notable, as it comes on the heels of failed negotiations between Anthropic and the Pentagon. Anthropic was negotiating a $200 million contract but refused to relax its restrictions on the use of its AI for surveillance and weaponry, which the Pentagon deemed unacceptable. The deadline for an agreement passed without resolution, prompting Defense Secretary Pete Hegseth to label Anthropic a "supply chain risk to national security." President Trump criticized Anthropic on Truth Social, declaring, "WE will decide the fate of our country — NOT some out-of-control, Radical Left AI company." In contrast, OpenAI adopted a different strategy, negotiating terms that allowed for all legal uses while incorporating safeguards. Altman stated that OpenAI's models would operate exclusively on cloud networks, avoiding deployment in edge environments like autonomous weapon systems. OpenAI is committed to embedding its engineers alongside government personnel for classified projects to ensure system security. Altman also urged the Pentagon to offer similar terms to other AI companies, suggesting a broad, industry-wide acceptance of these conditions. OpenAI's approach appears to have secured a deal without engaging in the political contention that undermined Anthropic's efforts. Altman emphasized the importance of doing the "right thing" rather than taking an easy path that might appear strong but lacks sincerity. The distinctions between OpenAI and Anthropic's stances could have significant implications. Anthropic insisted on "no fully autonomous weapons without human oversight," requiring active human involvement before deploying weapons. In contrast, Altman speaks of "human responsibility for the use of force," a more flexible concept that could apply after deployment. Anthropic also argued that current AI models are not reliable enough for use in fully autonomous weapons, positing that this could endanger both military personnel and civilians. Their stance on domestic mass surveillance also raises questions about the degree of involvement AI models have in analyzing pre-collected data. In response to the ban, Anthropic plans to challenge its "supply chain risk" designation in court, maintaining its opposition to mass surveillance and autonomous weapons. They stated, "No amount of intimidation or punishment from the Department of War will change our position." This situation highlights the diverging paths of two major AI companies in relation to government contracts and national security considerations.

Anthropic Challenges Pentagon's Supply Chain Risk Designation in Court
Anthropic has announced plans to legally contest the U.S. Department of Defense’s recent decision to label the AI company as a supply chain risk. This designation, typically used for foreign threats, was introduced by Secretary of Defense Pete Hegseth. Anthropic criticizes this classification as unlawful and intends to "challenge any supply chain risk designation in court," arguing that it sets a perilous precedent for American businesses engaging with government entities. Furthermore, Secretary Hegseth suggested that military suppliers should cease conducting business with Anthropic. However, Anthropic counters that this move lacks a legal foundation. The classification, as outlined in 10 USC 3252, pertains solely to direct contracts with the Department of Defense involving Anthropic's AI model, Claude. It does not affect private clients, commercial agreements, or usage via the API or claude.ai. The dispute stems from a breakdown in negotiations. Anthropic previously declined to make Claude available for mass domestic surveillance and fully autonomous weapons, citing the unreliability of current AI models and concerns over fundamental rights violations associated with mass surveillance. Following Anthropic's refusal, OpenAI stepped in to secure the agreement. The conflict highlights the tension between technological companies and government demands, as well as the broader implications for AI usage in defense and surveillance.

Anthropic CEO Resists Pentagon's Demands as Deadline Approaches
In a bold move, Anthropic CEO Dario Amodei has firmly rejected the Pentagon's demand for unrestricted access to the company's artificial intelligence systems. On Thursday, Amodei declared that he could not, in good conscience, comply with the military's request. He emphasized that while the Department of Defense is responsible for military decisions, certain applications of AI could potentially undermine democratic principles rather than support them. Amodei highlighted two specific areas of concern: the potential for mass surveillance of American citizens and the deployment of fully autonomous weapons systems without human oversight. In contrast, the Pentagon insists on using Anthropic’s AI for any lawful purpose, arguing that a private entity should not dictate its operational use. This declaration comes as a critical deadline looms—Defense Secretary Pete Hegseth has given Anthropic until Friday at 5:01 p.m. to comply or face serious repercussions. The Department of Defense is considering labeling Anthropic as a supply chain risk, a designation typically reserved for foreign threats, or invoking the Defense Production Act to compel the company to align with its objectives. This act empowers the president to require companies to prioritize national defense needs. Amodei pointed out the inconsistency in these threats, noting, "One labels us a security risk; the other considers our technology vital to national security." While acknowledging the department's prerogative to select contractors that align with its goals, Amodei expressed hope that Anthropic’s valuable technological contributions would encourage the Pentagon to reconsider its stance. Currently, Anthropic is the sole AI lab equipped with classified-ready systems for military use, though there are reports that the Department of Defense is preparing xAI to assume this role. "Our strong preference is to continue supporting the Department and its personnel, provided our two requested safeguards are honored," Amodei stated. He further assured that if the Department decides to terminate its partnership with Anthropic, they would ensure a seamless transition to another provider to prevent any disruption to critical military operations and planning. In essence, Amodei conveyed that parting ways can be handled amicably without hostility.

Jack Dorsey Drastically Reduces Block's Workforce; Predicts Industry-Wide Trend
Jack Dorsey has long expressed admiration for Elon Musk, and now it appears he may be following in his footsteps. On Thursday, Dorsey revealed a significant reduction in the workforce at Block, the financial services company he established, which includes Square, Cash App, and Tidal. The company will let go of over 4,000 employees, decreasing its global workforce from more than 10,000 to just under 6,000. This decision was met with approval from investors, pushing Block's stock up by over 24% in after-hours trading. This move mirrors a similar strategy employed by Musk when he acquired Twitter in November 2022, cutting about half of its staff. Dorsey, who was a substantial shareholder in Twitter, witnessed this dramatic restructuring firsthand. His unique relationship with Musk has seen fluctuating support and criticism, especially concerning Musk's Twitter acquisition and Dorsey's subsequent launch of Bluesky, a decentralized alternative to Twitter. Dorsey insists that the layoffs are a strategic decision rather than a financial necessity, although those affected might see it differently. He stated that frequent layoffs damage morale and trust among employees, customers, and investors. "I'd rather reach this point proactively and under our terms than be forced into it later," he remarked on X. He believes that other companies will soon follow suit within a year. The official rationale for the cuts is to enhance efficiency with the help of AI, according to Block's CFO Amrita Ahuja. The aim is to operate with smaller, highly skilled teams that leverage AI to automate tasks. In terms of severance, U.S.-based employees affected by the layoffs are promised 20 weeks of salary plus an additional week for each year of service, equity vested through the end of May, six months of healthcare, corporate devices, and $5,000 to assist during the transition. Employees outside the U.S. will receive comparable support based on their local regulations. Block is not alone in this trend. Companies such as Salesforce and Amazon have also enacted large-scale layoffs, attributing them to efficiencies gained through AI. However, a recent Forrester Research report suggests that many of these layoffs might be financially motivated rather than driven by actual technological advancements.

Anthropic Stands Its Ground Against Pentagon on AI Usage Policies as Deadline Nears
Anthropic's CEO, Dario Amodei, has reiterated the company's unwavering stance against the Pentagon's use of its AI technology for mass domestic surveillance and fully autonomous weapons systems. As the deadline approaches, Amodei emphasizes that current AI systems, while integrated into defense and intelligence operations, are not reliable enough to replace human decision-making entirely. The Pentagon, however, insists that existing laws and guidelines are adequate and has resisted providing further written assurances. In a public escalation, Chief Technology Officer Emil Michael criticized Amodei, accusing him of dishonesty and arrogance. Amodei highlights Anthropic's pioneering role in deploying AI models within government and national security sectors. Despite widespread use of their AI model Claude in intelligence and mission planning, Amodei maintains that removing humans from critical military decision loops is premature due to AI's current limitations. Anthropic has offered to collaborate with the Pentagon on enhancing AI reliability, but these overtures have been declined. On the issue of domestic surveillance, Amodei warns of AI's potential to compile detailed profiles from disparate data points on a large scale. He also points out the contradiction in the Pentagon's stance: labeling Anthropic a security risk while simultaneously invoking the Defense Production Act to deem it essential to national security. Despite the looming deadline, Anthropic remains resolute. The company is prepared to ensure a seamless transition if the Pentagon decides to drop its systems. Anthropic asserts that it has sacrificed substantial revenue by severing ties with Chinese firms connected to the Communist Party and is advocating for strict chip export controls, which also strategically disadvantage Chinese competitors. In response to the impasse, Pentagon technology chief Emil Michael claims the military has offered significant concessions, such as acknowledging laws against domestic surveillance and offering Anthropic a role on an AI ethics board. However, Anthropic finds these concessions inadequate. The Pentagon's refusal to provide explicit guarantees against using Anthropic's AI for mass surveillance or autonomous weapons is justified by existing laws and policies, according to Michael. He stresses the importance of preparing for potential AI advancements by nations like China. As the deadline approaches, legal expert Alan Z. Rozenshtein explains the potential implications of the Defense Production Act. The law allows the government to compel companies to fulfill national defense needs, but the scope of this authority depends on the specific demands made by the Pentagon. If the Pentagon demands the use of Claude without its current usage restrictions, they might have a strong case. However, if it requires retraining Claude or removing safety features, the legal ground becomes tenuous, potentially raising First Amendment issues. Rozenshtein echoes Amodei's point about the contradiction in the Pentagon's approach, noting that treating Anthropic as both a security risk and an essential defense asset is inconsistent. If Anthropic resists compliance, it risks facing legal consequences, though it's likely the company would challenge any such order in court.

Gushwork Leverages AI for Customer Acquisition, Sees Promising Early Results
In the evolving landscape of online business discovery, Gushwork, an India-based startup, is making strides in capturing customers through AI-powered platforms such as ChatGPT, Gemini, and Perplexity. The company is beginning to attract investor interest with its initial successes. On Thursday, Gushwork announced it had secured $9 million in a seed funding round led by Susquehanna International Group (SIG) and Lightspeed, with contributions from B Capital, Seaborne Capital, Beenext, Sparrow Capital, and 2.2 Capital. This investment values Gushwork at $33 million post-funding, a significant increase from its $7.5 million valuation after a $2.1 million pre-seed round led by Lightspeed in July 2023, according to sources familiar with the details. With this new funding, Gushwork’s total financial backing reaches $11 million. As AI firms like OpenAI and Perplexity are beginning to transform traditional web search, even prompting giants like Google to implement AI-generated features, Gushwork sees an opportunity. The startup aims to help businesses become more visible in AI-driven discovery pathways by employing automated marketing agents. Founded in 2023 by Nayrhit Bhattacharya and Adithya Venkatesh, Gushwork initially targeted small and medium-sized enterprises by offering a blend of AI and human expertise to streamline outsourcing processes. However, the company soon shifted its focus towards search-led marketing due to rising demand for enhanced online visibility. Gushwork's platform leverages AI agents to craft and update search-optimized content, establish backlinks through a network of 200 to 300 partner sites, and monitor inbound leads with an integrated content management system. Bhattacharya states the platform's aim is to boost business presence in both traditional and AI-generated search results without needing large in-house marketing teams. Currently, Gushwork boasts over 300 paying clients, with approximately 95% based in the U.S., and subscription costs starting at $800 monthly. The company reports an annualized recurring revenue of $1.5 million, following the launch of its AI search-focused product three months ago, and projects reaching $3 million to $3.5 million in ARR within the next quarter. Bhattacharya notes the startup’s impressive growth rate of 50% to 80% month-over-month. Around 20% of Gushwork’s clients' website traffic originates from AI-driven search and chat platforms, yet these channels account for roughly 40% of inbound leads. These high-intent leads are already delivering tangible business outcomes for several clients. For instance, a professional services client has reportedly secured contracts worth $200,000 to $350,000 post-adoption of Gushwork’s platform, though the client’s identity remains undisclosed. Many users are witnessing significant growth in their sales pipelines as AI-driven discovery gains traction. Gushwork’s clientele primarily consists of high-ticket B2B service providers, industrial distributors, and contract manufacturers, especially in the U.S. The average subscription fee ranges from $800 to $900 monthly, equating to about $9,000 to $10,000 annually. As AI-driven discovery continues to evolve, tools like generative AI chatbots and AI web browsers are becoming vital in vendor and product research. In July 2025, OpenAI reported that ChatGPT received around 2.5 billion daily prompts worldwide, with 330 million from U.S. users alone. Bhattacharya believes this trend is reshaping online visibility strategies for businesses. With the fresh funding, Gushwork plans to expand its engineering team, enhance AI model accuracy, and scale its market outreach. The company has over 800 businesses on its waitlist, which it intends to start onboarding soon. Headquartered in Delaware with an office in Bengaluru, Gushwork employs approximately 70 staff in India, alongside several contractors.

Salesforce CEO Marc Benioff Dismisses 'SaaSpocalypse' Fears with Strong Earnings and AI Strategy
In an effort to reassure investors about its future amidst the rise of AI, Salesforce unveiled impressive fourth-quarter earnings, reporting a revenue of $10.7 billion—marking a 13% increase compared to the previous year. The annual revenue hit $41.5 billion, up 10% from last year, bolstered by its $8 billion acquisition of data management firm Informatica in May. Additionally, Salesforce's net income reached $7.46 billion, with promising guidance for the upcoming year forecasting revenue between $45.8 billion and $46.2 billion, reflecting a 10% to 11% growth. The company's "remaining performance obligation" (RPO), which indicates contracted revenue not yet realized, soared to over $72 billion. Despite these robust numbers, Salesforce and other Software-as-a-Service (SaaS) companies have faced pressure due to fears that AI innovations could disrupt their business models that rely on per-employee subscriptions. This has led to talks of a "SaaSpocalypse." During the earnings call, CEO Marc Benioff addressed these concerns head-on, using the term "SaaSpocalypse" multiple times, humorously noting, "If there is a SaaSpocalypse, it may be eaten by the Sasquatch because there are a lot of companies using a lot of SaaS because it just got better with agents." To bolster investor confidence, Salesforce announced a nearly 6% increase in its dividend to $0.44 per share and launched a new $50 billion share buyback program, which is often favored by shareholders as it can enhance the stock price by reducing the number of shares available. In an unusual move, the earnings call was transformed into a mix of podcast, infomercial, and traditional Q&A session, featuring interviews with key Salesforce clients. The CEOs of SharkNinja, Wyndham Hotels and Resorts, and SaaStr all praised Salesforce's new AI-driven solutions. The company introduced a new metric for its AI products—agentic work units (AWU)—which aims to measure the actual completion of tasks by AI agents, rather than just processing volume. Furthermore, Salesforce outlined its vision for AI's role within its platform, positioning its SaaS software as the dominant player in the tech stack, with AI models functioning as commoditized engines. This stance contrasts with OpenAI, whose release of its enterprise agent Frontier suggests a different tech stack hierarchy. In a final touch of flair, Benioff donned a black leather jacket reminiscent of Nvidia CEO Jensen Huang's iconic style, signaling confidence in Salesforce's place in the AI revolution.

Alibaba Unveils Cost-Effective Qwen 3.5 Models to Compete with GPT-5 Mini and Claude Sonnet 4.5
In an ambitious move, Alibaba has introduced its Qwen 3.5 series, setting a competitive stance against renowned models like GPT-5 mini and Claude Sonnet 4.5, but at a significantly lower cost. Unveiled on February 26, 2026, the Qwen 3.5 lineup consists of four distinct models: Qwen3.5-Flash, Qwen3.5-35B-A3B, Qwen3.5-122B-A10B, and Qwen3.5-27B. These models promise enhanced performance while requiring less computational power, a feat that could redefine efficiency in AI development. Each of these models is capable of processing text, images, and video inputs to generate text outputs. Notably, the compact Qwen3.5-35B-A3B model surpasses its larger predecessor, Qwen3-235B-A22B, underscoring the importance of advanced architecture, superior data quality, and reinforcement learning over sheer model size. Meanwhile, the 122B and 27B models are designed to bridge the performance gap, especially in complex agent scenarios. Alibaba claims that the Qwen 3.5 models either match or exceed the capabilities of leading Western models, such as OpenAI's GPT-5 mini, gpt-oss-120b, and Anthropic's Claude Sonnet 4.5, across a range of benchmarks. These models are accessible on platforms like Hugging Face, ModelScope, and Qwen Chat, under the Apache License 2.0. This open-source license permits commercial use, modification, and redistribution, broadening the models' appeal and usability. The Qwen3.5-Flash model, designed for hosted production, features a context length of one million tokens and includes built-in tools. Its API pricing is structured at $0.10 per million input tokens and $0.40 per million output tokens, making it an attractive option for developers and businesses alike.

Ailias Brings Historical Figures to Life with Hologram Avatars
Imagine hosting a dinner party with historical icons like Aristotle or Isaac Newton. Ailias, a company in Surrey, UK, aims to turn this fantasy into reality by offering 3D hologram avatars of legendary figures. These interactive, knowledgeable avatars can be delivered straight to your doorstep, packaged and ready for conversation. While holographic technology isn't new, Ailias distinguishes itself by focusing on educational and historical characters rather than mere spectacle, describing their work as 'ultra character creation.' Ailias’ avatars aren't just static displays; they can perform actions like juggling and dancing, adding a unique flair to any event. Pricing varies depending on the duration and customization of the hologram rental or purchase. A visit to their office revealed that a week's rental could cost several thousand pounds, which includes software, delivery, and installation. Their selection includes over 70 characters, featuring the likes of Henry VIII and Cleopatra. The focus on historical figures helps Ailias avoid legal issues associated with using modern personalities, as this practice could lead to trademark disputes. During a demonstration, I conversed with a holographic Albert Einstein, who responded swiftly and accurately to a variety of topics. The technology relies on open-source AI and third-party video generation, creating a realistic, if not entirely authentic, interaction. The avatars are designed for fun and education, rather than creating a perfect impersonation of the past. Ailias also offers bespoke holograms, allowing brands or individuals to commission custom avatars. This can be a creative marketing tool or a personal project, though ethical guidelines are in place to prevent misuse. The potential applications of holograms are vast, from enhancing brand awareness to serving as AI concierges in hotels. The ability to create personal avatars could be both entertaining and controversial. Ultimately, Ailias is pushing the boundaries of how we interact with historical figures through innovative holographic technology. Whether for educational purposes or personal enjoyment, these avatars offer an engaging way to bring the past into the present.

Access Claude Code Sessions from Any Device Anywhere
As of February 25, 2026, Claude Code users can now seamlessly continue their programming sessions from any device, including smartphones, tablets, or web browsers. This new feature allows sessions to remain active on the user's local machine, ensuring that no data is transferred to the cloud. Users can maintain access to local files, servers, and project settings without interruption. By connecting via claude.ai/code or through the Claude app available on iOS and Android, users can effortlessly switch between terminal, browser, and mobile phone. Should the network connection drop, the session will automatically attempt to reconnect, although it will terminate after approximately ten minutes of offline status. Initially, this feature is being rolled out as a research preview for Max subscribers, with Pro users expected to gain access soon after. Unlike the web-based Claude Code, which has been utilizing Anthropic's cloud infrastructure since last year, these remote control sessions are fully hosted on the user's personal computer. Anthropic is actively enhancing Claude Code by integrating automated code reviews and GitHub functionalities. The company is also in the process of a significant funding round, aiming to raise $10 billion with a valuation of $350 billion. According to inventor Boris Cherny, the new Claude Cowork tool was predominantly developed using Claude Code itself.

Anthropic Stands Firm Against Pentagon's AI Policy Demands Amid Legal Threats
Anthropic, an AI firm, is standing its ground against the U.S. Department of Defense's pressure to relax its military AI restrictions, despite potential legal action. According to a report from Reuters, the company is resisting demands to alter its safety protocols that prevent the use of its technology in autonomous weaponry and domestic surveillance operations. During a recent meeting, Anthropic CEO Dario Amodei faced a stark ultimatum from U.S. Secretary of Defense Pete Hegseth: comply by the end of the week or face the consequences of the Defense Production Act. This law could compel Anthropic to comply or potentially label the company as a risk to the supply chain. Franklin Turner, an attorney specializing in government contracts with McCarter & English, noted that such action would be unprecedented and might lead to a surge of legal challenges. Amodei maintains that Anthropic's current safety measures do not impede military operations. Concurrently, the Pentagon is progressing with AI contracts with companies like Google, xAI, and OpenAI. These initiatives focus on deploying AI in military scenarios, such as autonomous drones, robotics, and cyber warfare. Notably, xAI, founded by Elon Musk, has already finalized an agreement with the Pentagon to operate on classified networks. This situation highlights the ongoing tension between tech firms and military demands as AI technology continues to evolve.

Canva Expands Portfolio with Acquisitions in Animation and AI-Powered Marketing
On Monday, Canva, the popular creative platform, revealed it has acquired two innovative startups: Cavalry, a UK-based company specializing in 2D motion animation, and Mango AI, which focuses on enhancing advertisement performance through advanced AI systems. This strategic move is set to enhance Canva’s offerings by integrating new technologies into its suite. Cavalry is renowned for its work in various fields including advertising, marketing, gaming, and generative art, offering robust 2D animation solutions. Canva plans to integrate Cavalry's technology into its Affinity suite, which the company acquired in 2024. Affinity, known for its comprehensive photo, vector, and layout editing capabilities, was redesigned last year and has since been downloaded over five million times after becoming free for all users. The acquisition of Cavalry will enable Canva to introduce motion editing features, bridging a significant gap in its creative suite. Canva stated in a blog post, "By integrating Cavalry with Affinity, we're enhancing our professional suite to include motion editing, forming a complete Creative OS for professionals that maintains the depth and control they require." In addition to Cavalry, Canva has also acquired Mango AI, a stealth startup developing reinforcement learning systems aimed at boosting video ad performance. Founded by Nirmal Govind, formerly of Netflix, and Vinith Misra, who has experience with Netflix and Roblox, Mango AI's innovative product helps clients optimize ad creation and assess campaign outcomes to inform future efforts. With this acquisition, Govind will assume the role of Canva's first "Chief Algorithms Officer," while Misra will focus on upgrading Canva's marketing products. This move follows Canva's acquisition of marketing intelligence startup Magicbrief in January 2025 and the launch of Canva Grow, a tool designed to aid in asset creation and performance assessment. During a recent discussion at the Web Summit in Qatar, Canva co-founder and COO Cliff Obrecht highlighted the success of Canva Grow, particularly in creating static content for Meta platforms. "It's an early-stage product with a devoted user base, including some major brands investing heavily," Obrecht noted. "We're gearing up to expand video creation and multi-platform deployment soon." These acquisitions underscore Canva's ambition to strengthen its standing as a comprehensive marketing solutions provider. The company ended 2025 with impressive figures, boasting $4 billion in annualized revenue, over 265 million users, and 31 million paying subscribers.

Tech Titans Prepare for Deepseek's Anticipated AI Launch Amid Controversy
Anticipation is building within the tech industry as Google, OpenAI, and Anthropic gear up for the imminent release from Chinese AI firm Deepseek. Scheduled for next week, this release is stirring discussions due to the model reportedly being trained on Nvidia's advanced Blackwell chips, despite existing US export restrictions. According to Reuters, which cited a senior official from the previous Trump administration, these chips were possibly acquired under clandestine circumstances, with rumors of smuggling circulating since the end of last year. The chips are said to be housed in a data center located in Inner Mongolia. There is speculation that Deepseek plans to eliminate any technical traces of US chip utilization before making the model public. Details on how the chips were obtained remain undisclosed, as Nvidia chose not to comment, and both Deepseek and the US Department of Commerce have yet to respond to inquiries from Reuters. This development comes amidst ongoing concerns from industry leaders like Google, OpenAI, and Anthropic, who have reported instances of distillation attacks on their AI models by Chinese companies. In response, OpenAI recently adjusted the interpretation of a well-known coding benchmark, suggesting a strategic shift in anticipation of Deepseek's capabilities. Deepseek's previous major release in January 2025 had a significant impact on US tech stocks, coinciding with a booming AI market. The current buzz suggests that the upcoming release may once again deliver impressive results at competitive prices, potentially shaking the industry anew.

OpenAI Enhances API for Improved Voice Command Accuracy and Faster Agent Performance
On February 24, 2026, OpenAI announced the release of significant API enhancements aimed at improving voice command reliability and boosting agent processing speeds for developers. The introduction of the new gpt-realtime-1.5 model is a key upgrade for the real-time API, promising more dependable voice command functionality. Internal tests by OpenAI revealed improvements, including a ten percent increase in accuracy for transcribing numbers and letters, a five percent enhancement in handling logical audio tasks, and a seven percent gain in following instructions effectively. Additionally, the audio model has been updated to version 1.5. Another major update involves the Responses API, which now includes support for WebSockets. This feature establishes a persistent connection, allowing only new data to be transmitted, rather than resending the entire context with each request. OpenAI claims this update enhances the speed of complex AI agents, especially those requiring numerous tool calls, by 20 to 40 percent. These upgrades mark a significant step forward in optimizing API performance for developers working with voice commands and AI agents.

ChatGPT and Gemini Voice Assistants Vulnerable to Misinformation
A recent investigation by Newsguard has revealed that ChatGPT and Gemini voice assistants are susceptible to disseminating false information. The study focused on the ability of these AI-driven systems—ChatGPT Voice by OpenAI, Gemini Live by Google, and Alexa+ by Amazon—to repeat misleading statements when presented with various types of prompts. The experiment involved 20 fabricated claims across topics such as health, U.S. politics, global news, and foreign disinformation. These claims were posed to the systems using neutral questions, leading questions, and deliberately misleading prompts aimed at generating a radio script containing the falsehoods. The results showed that ChatGPT echoed these inaccuracies 22% of the time, while Gemini did so 23% of the time. The use of malicious prompts dramatically increased these figures to 50% for ChatGPT and 45% for Gemini. In contrast, Amazon's Alexa+ demonstrated a robust resistance to such misinformation, maintaining a 0% fail rate across all prompt types. According to Amazon Vice President Leila Rouhi, this resilience is attributed to Alexa+'s reliance on trusted news sources like the Associated Press and Reuters. OpenAI did not provide a comment on these findings, and Google did not respond to requests for input. For those interested in the detailed methodology of the study, further information is available on Newsguard's website.

Secure Your Spot at TechCrunch Disrupt 2026: Only 6 Days Left for Best Pricing
The countdown is on for those eager to attend TechCrunch Disrupt 2026 at the most affordable rates. The Super Early Bird pricing ends on February 27 at 11:59 p.m. PT, giving you just six days to grab this opportunity. Whether you’re a tech enthusiast or an industry leader, now is the perfect time to register and save significantly, with individual passes offering up to $680 off and community passes available at a 30% discount. Scheduled for October 13-15 at San Francisco’s Moscone West, TechCrunch Disrupt will gather an impressive crowd of 10,000 participants, including founders, investors, operators, and innovators. Over three days, attendees will engage in activities centered around launching, scaling, and innovating in the tech world. What makes Disrupt a must-attend event? It provides unparalleled access to founders, venture capitalists, and operators who are at the forefront of their fields. Attendees can engage in discussions that may lead to funding opportunities, partnerships, or key hires, and gain tactical insights to apply immediately. Furthermore, it offers a glimpse into the future direction of technology. Over 300 startups will showcase tomorrow’s innovations, and the high-stakes Startup Battlefield 200 pitch competition will present a chance to see a standout company win a $100,000 equity-free prize. Curated networking opportunities will be available to drive meaningful connections and outcomes. Moreover, the event will feature talks from influential tech figures like WordPress co-founder Matt Mullenweg, General Motors CEO Mary Barra, and esteemed VC Vinod Khosla. Keep an eye on the event page for updates on the agenda. For maximum benefit, consider the Founder Pass, designed to accelerate your growth with the right insights and connections, or the Investor Pass, tailored to help you discover emerging startups and expand your investment portfolio. Remember, only six days remain to secure these passes at the lowest rates of the year. Make sure to register before the deadline on February 27, 11:59 p.m. PT. Additionally, TechCrunch is offering a separate opportunity to join the TechCrunch Founder Summit 2026 in Boston, MA, on June 9. This one-day event, bringing together over 1,000 founders and investors, focuses on growth, execution, and real-world scaling. Participants will learn from industry-shaping founders and investors and connect with peers facing similar growth challenges. The offer to save up to $300 or 30% ends on March 13. Don’t miss the chance to register now.

MPA Criticizes Bytedance's Seedance 2.0 for Encouraging Systematic Copyright Violations
The Motion Picture Association (MPA) has taken a firm stance against Bytedance, labeling its Seedance 2.0 as a tool designed for 'systematic infringement' of copyright laws. On February 22, 2026, the MPA issued a cease-and-desist letter to Bytedance, emphasizing that the AI video generator's capability to infringe on copyrights is not an unintended flaw but a fundamental aspect of its operation. The MPA contends that Bytedance has trained its AI model using studio content without acquiring proper permissions and has released the service without implementing necessary safeguards. This, the association argues, has resulted in the unauthorized reproduction and distribution of content that infringes on the copyrights of MPA member studios, including major players like Netflix, Warner Bros., Disney, Paramount, and Sony. These studios had already taken individual legal actions against Bytedance, reflecting a broader concern within the industry about generative AI companies' practices. Warner Bros. criticized Bytedance for following a typical strategy where companies initially leverage copyrighted material for marketing advantages, only to introduce protective measures under legal pressure—a tactic previously seen with OpenAI. Ongoing investigations have repeatedly identified instances where Seedance 2.0 has violated the rights of these studios. Speculation is rife that the mounting copyright complaints might delay the launch of the Seedance 2.0 API, originally scheduled for February 24, as Bytedance may be hastily working on implementing more robust safeguards. This controversy follows an earlier incident where Bytedance agreed to restrict Seedance after Disney threatened legal action over intellectual property violations. Disney accused Bytedance of creating a 'pirate library' featuring characters from franchises like Marvel and Star Wars. The company has since partnered exclusively with OpenAI. Viral videos using copyrighted characters have been widely shared on social media since the release of Seedance 2.0, prompting further legal threats from Paramount and the involvement of organizations like SAG-AFTRA. Japan has also launched an investigation into potential copyright infringements related to anime characters. In response, Bytedance has stated its commitment to respecting intellectual property rights and is reportedly working on enhancing its protective measures, though specific details have not been disclosed.

Google's Gemini 3.1 Pro Preview Dominates AI Index at a Fraction of Competitors' Costs
Google's Gemini 3.1 Pro Preview has secured the top position in the Artificial Analysis Intelligence Index, surpassing its competitors by a margin of four points while being significantly more cost-effective. This model excels in six out of ten evaluated categories, including agent-based coding, knowledge, scientific reasoning, and physics. Notably, it has achieved a substantial reduction in its hallucination rate, decreasing by 38 percentage points compared to its predecessor, Gemini 3 Pro, which had previously struggled in this area. The Artificial Analysis Intelligence Index consolidates ten benchmarks into a single overall score. Gemini 3.1 Pro Preview achieved a score of 57 points, outpacing Anthropic's Claude Opus 4.6 by four points and GPT-5.2 by six points. Testing the full index with Gemini incurs a cost of $892, in stark contrast to the $2,304 required for GPT-5.2 and $2,486 for Claude Opus 4.6. Gemini efficiently uses 57 million tokens, considerably fewer than the 130 million tokens needed by GPT-5.2. Meanwhile, open-source alternatives like GLM-5 are even more economical, costing $547. However, despite its impressive performance in benchmarks, Gemini 3.1 Pro falls short in real-world agent tasks compared to Claude Sonnet 4.6, Opus 4.6, and GPT-5.2. Additionally, in internal fact-checking trials, Gemini 3.1 Pro verified only about a quarter of statements, performing worse than Opus 4.6 and GPT-5.2, and even trailing behind the earlier Gemini 3 Pro version. While benchmarks provide valuable insights, they have their limitations, and individual assessments may vary.

Sam Altman Highlights Human Energy Consumption in AI Debate
During a recent event hosted by The Indian Express, Sam Altman, CEO of OpenAI, tackled the topic of AI's environmental footprint. Altman, who was in India for a prominent AI summit, dismissed worries about AI's water consumption as "totally fake." He acknowledged that water use was once a concern when evaporative cooling was employed in data centers, but assured that such practices are no longer in use. "Claims that using ChatGPT consumes 17 gallons of water per query are entirely baseless," Altman asserted. While Altman considers concerns about AI's water use unfounded, he believes it's reasonable to examine the total energy demand created by widespread AI adoption. He emphasized the necessity for a shift towards sustainable energy sources such as nuclear, wind, and solar power. Although tech firms are not legally obliged to report their energy and water consumption, independent scientists have taken the initiative to study these impacts. Rising electricity costs have been linked to the operation of data centers. In response to a question referencing a conversation with Bill Gates, Altman refuted claims that a single ChatGPT query uses energy equivalent to 1.5 iPhone battery charges, dismissing the comparison as inaccurate. He criticized discussions about ChatGPT’s energy use as "unfair," particularly those comparing the energy needed to train AI models with what humans use for single inference queries. Altman humorously pointed out that training a human requires significant energy, spanning over 20 years of life and all the sustenance consumed during that period. He highlighted the extensive evolutionary history of humanity, suggesting that AI, when measured in terms of energy efficiency for providing answers, might already be on par with humans. To explore more of Altman's insights, including his views on energy and water use, the full interview is available, starting at roughly 26:35.

Apple's AI Under Fire for Unintentional Bias Across Millions of Devices
A recent investigation has unveiled significant biases within Apple’s AI-driven feature, impacting millions of users across iPhones, iPads, and Macs. Conducted by the non-profit AI Forensics, the study scrutinized over 10,000 summaries produced by Apple Intelligence, revealing systematic biases in handling identity-related content. The analysis highlighted that the AI often omits the ethnicity of white individuals more frequently than other ethnic groups and tends to reinforce gender stereotypes in ambiguous contexts. This AI feature, which automatically summarizes notifications, texts, and emails, could potentially be classified as a systemic risk model under the EU AI Act. Despite this, Apple has not yet committed to the voluntary Code of Practice. Using Apple’s developer framework, AI Forensics accessed the system, which operates with approximately three billion parameters. In tests involving 200 fictional news stories, researchers discovered significant discrepancies: the ethnicity of white characters was noted in only 53% of summaries, compared to 64% for Black, 86% for Hispanic, and 89% for Asian characters. Gender biases were equally concerning. In an analysis of 200 BBC headlines, women’s first names appeared in 80% of summaries, while men's were included only 69% of the time, often replaced with surnames—a choice linked to higher perceived status. The AI’s handling of ambiguous texts was particularly troubling. In 77% of scenarios featuring ambiguous pronouns, the system assigned them to a specific gender, often aligning with stereotypes—such as associating 'she' with nurses and 'he' with surgeons. The AI also fabricated social biases in 15% of cases, tagging Syrian students with terrorism, pregnant applicants as unfit for work, and attributing incompetence to those of short stature—all unsupported by the original text. In comparison, Google's Gemma3-1B model, with a third of the parameters, showed significantly less bias, hallucinating only 6% of the time and conforming to stereotypes in a smaller proportion of cases. These biases are part of a broader issue with large language models, which can reflect societal prejudices. Unlike other platforms, Apple Intelligence operates without user prompts, inserting itself directly into communications. Previously, Apple faced criticism for generating false news summaries linked to reputable sources like the BBC and New York Times, leading to the suspension of summaries for news apps. However, personal and professional messaging remains affected by AI biases. This comes amid broader challenges for Apple’s AI initiatives, as the company struggles to meet promised upgrades for Siri and explores partnerships with Google’s Gemini to enhance its AI capabilities. The findings by AI Forensics underscore the urgent need for Apple to address these embedded biases in its AI systems.

Indian Startup Sarvam Unveils Indus AI Chat App Amid Growing Competition
Sarvam, an emerging Indian AI company dedicated to developing language models tailored for local languages and users, has introduced its Indus chat application for both web and mobile platforms. This launch positions Sarvam in an increasingly competitive market space, largely dominated by international giants such as OpenAI, Anthropic, and Google. As India emerges as a key player in the adoption of generative AI technologies, this move is particularly significant. OpenAI's CEO, Sam Altman, recently highlighted that ChatGPT boasts over 100 million weekly active users in India. Additionally, Anthropic reported that India contributes to 5.8% of its total usage for Claude, trailing only behind the United States. Indus functions as a chat interface for Sarvam's newly introduced Sarvam 105B model, a massive language model comprising 105 billion parameters. This announcement comes just two days after Sarvam revealed its 105B and 30B models at the India AI Impact Summit held in New Delhi. At the summit, the company also shared its strategic plans for enterprise collaborations, hardware development, and partnerships with companies like HMD to integrate AI into Nokia feature phones, and Bosch for automotive AI applications. Currently in its beta phase, the Indus app is accessible on iOS, Android, and web platforms. Users can engage with the app by typing or speaking their queries and receiving answers in both text and audio formats. Sign-in is facilitated through phone numbers, along with Google, Microsoft, or Apple accounts, although the service is presently restricted to the Indian market. However, there are a few limitations to the app. Users are unable to delete their chat history independently; it requires account deletion. Moreover, there's no option to disable the app's reasoning feature, which can occasionally slow down response times. Sarvam has also cautioned that access might be limited as they work on expanding their compute capacity. "We’re gradually rolling out Indus on a limited compute capacity, so you may hit a waitlist at first. We will expand access over time," stated Pratyush Kumar, Sarvam's co-founder, on X. The company is actively seeking user feedback to enhance the app's performance. Established in 2023, Sarvam has successfully raised $41 million from investors such as Lightspeed Venture Partners, Peak XV Partners, and Khosla Ventures, focusing on creating large-scale language models specifically for the Indian context. Sarvam is part of a growing cohort of Indian startups that aim to develop homegrown alternatives to global AI platforms, striving for increased autonomy over India's AI ecosystem.

Internal Debate at OpenAI: Should Canadian Police Have Been Alerted About Concerning ChatGPT Activity Prior to School Shooting?
Months before a tragic school shooting in Tumbler Ridge, British Columbia, OpenAI employees were involved in intense discussions about whether to inform Canadian authorities about alarming activity on ChatGPT. According to a report by the Wall Street Journal, roughly a dozen staff members deliberated over whether to notify the police about a user who had been repeatedly describing scenarios of gun violence. These concerning interactions were flagged by OpenAI’s automated systems and reviewed by staff, but the decision was made not to report them to law enforcement. A company spokesperson explained that the user’s messages did not meet the necessary criteria for a 'credible and imminent risk of serious physical harm,' leading to the choice of merely blocking the account instead. Before Jesse Van Rootselaar became the main suspect in the school shooting incident, she had engaged in troubling conversations with ChatGPT in June 2025. OpenAI's models are designed to discourage discussions promoting real-world violence. When users express harmful intentions, these are flagged for human review, and law enforcement can be contacted if there is a significant threat. Despite internal concerns about the potential for real-world violence, the decision was made not to involve Canadian police at that time. OpenAI later reached out to the Royal Canadian Mounted Police (RCMP) after the attack occurred and is currently cooperating with the ongoing investigation. Van Rootselaar's digital activities extended beyond ChatGPT. On the gaming platform Roblox, she reportedly participated in simulations of mass shootings and engaged in discussions about gun-related YouTube videos. On February 10, she was found deceased at the scene of the shooting, apparently due to a self-inflicted wound, after killing eight individuals and injuring at least 25 more. The RCMP identified the 18-year-old as the shooter. The incident highlights the challenging balance AI companies face between respecting user privacy and ensuring public safety. OpenAI's decision-making process in this case underscores the complexities involved in assessing and acting upon digital warning signs of potential violence.

Anthropic's AI Security Innovation Shakes Up Cybersecurity Stocks
Anthropic has unveiled Claude Code Security, a groundbreaking AI tool capable of identifying security vulnerabilities in software code that often elude traditional scanners. This new tool employs an advanced method, mimicking the analytical approach of a human security researcher by understanding code interactions and data flow within applications. As news of this innovation spread, it triggered a significant sell-off in cybersecurity stocks, with major players like CrowdStrike, Cloudflare, Okta, and SailPoint experiencing notable declines of over 8%. Claude Code Security stands out by moving beyond conventional pattern-matching techniques. While existing tools can identify obvious issues such as exposed passwords, they often miss intricate problems like business logic flaws. Anthropic's new tool addresses this gap by providing a sophisticated analysis akin to human reasoning. It reviews code, assesses vulnerabilities, and offers suggested fixes, though final implementation still requires human oversight. Initial access to Claude Code Security is limited to Enterprise and Team customers, with open-source project maintainers offered free and expedited entry. The tool has been tested through various rigorous scenarios, including capture-the-flag competitions and partnerships aimed at safeguarding critical infrastructure. Remarkably, it has already uncovered over 500 vulnerabilities in production code, highlighting its potential impact. The broader implications of this tool are significant. Anthropic anticipates that AI-driven scanning will soon cover a large portion of global codebases, significantly enhancing the detection of long-hidden bugs. However, the rise of AI in cybersecurity also means that attackers could exploit these tools to find vulnerabilities more rapidly. The market's reaction reflects broader concerns about the impact of AI on the software industry. Investors fear that new AI capabilities might enable users to create bespoke applications, potentially diminishing the demand for established software solutions. Nevertheless, it's unlikely that companies will forego proven software products entirely. Instead, AI is expected to drive down production costs, allowing for the emergence of niche applications while established products continue to integrate AI enhancements. While AI may reduce development expenses, operational costs such as maintenance, compliance, and integration remain substantial. The market's focus on reduced production costs may overlook these ongoing expenses, which are crucial to the long-term viability of software solutions.

G42 and Cerebras Collaborate to Launch 8 Exaflops Supercomputer in India
Abu Dhabi's technology firm G42 has teamed up with American semiconductor company Cerebras to establish a groundbreaking supercomputer system in India, delivering a formidable 8 exaflops of computing power. Announced during the India AI Impact Summit in New Delhi, this initiative will comply with India's stringent data residency, security, and compliance standards. The supercomputer is set to serve as a backbone for AI-driven projects across educational institutions, government bodies, and small to mid-sized businesses. Manu Jain, CEO of G42 India, emphasized the strategic importance of this development, stating, "Sovereign AI infrastructure is crucial for maintaining a competitive edge on a national level. This project empowers India with the necessary infrastructure, allowing researchers, innovators, and enterprises to embrace AI while ensuring complete control and security over their data." The collaboration also includes Abu Dhabi’s Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and India's Centre for Development of Advanced Computing (C-DAC). A notable achievement from the previous year was the release of the Nanda 87B language model by MBZUAI and G42, which supports Hindi-English translations based on Meta’s Llama 3.1 70B model. Andy Hock, Cerebras' Chief Strategy Officer, remarked on the importance of this deployment for India's AI landscape, highlighting its potential to significantly boost the nation's computational prowess and advance AI sovereignty. The system is expected to accelerate the development of large-scale AI models, tailored specifically to meet India's unique requirements. The India AI Impact Summit also saw major commitments from Indian and international companies to bolster the country's AI infrastructure. For example, Indian giant Adani announced a $100 billion investment to develop up to 5 gigawatts of data-center capacity by 2035, while Reliance pledged $110 billion over seven years for gigawatt-scale data centers. In a related development, OpenAI has partnered with Tata Group to secure 100 megawatts of AI computing capacity in India, with plans to expand it to 1 gigawatt through its Stargate project. India's technology minister, Ashwini Vaishnaw, revealed the government's intention to attract over $200 billion in infrastructure investment within two years by leveraging tax incentives, state-backed venture capital, and supportive policies. So far, U.S. tech giants, including Amazon, Google, and Microsoft, have already committed approximately $70 billion to enhance AI and cloud infrastructure in India.

Microsoft's AI Media Authentication Study Reveals Technology Shortcomings Amid New Legislative Demands
Microsoft has released a critical report evaluating the effectiveness of technologies designed to authenticate media in the age of AI-generated content. The study highlights significant limitations in current methods, even as new laws assume these technologies can reliably distinguish between real and synthetic media. The report, titled "Media Integrity and Authentication: Status, Directions, and Futures," is part of Microsoft's LASER program, spearheaded by Chief Scientist Eric Horvitz. It brings together experts from various fields, including AI, cybersecurity, and policy, to assess three main technologies: provenance metadata, invisible watermarks, and digital fingerprints. Each approach has been found to have considerable vulnerabilities. Provenance metadata, which uses cryptographic signatures to verify a file's origin and edits, can be easily removed, while invisible watermarks, which encode information imperceptibly, are fallible and prone to errors. Digital fingerprints, which match content against a database, face issues like hash collisions and high storage demands. Microsoft underscores that validated provenance data merely indicates unchanged content since signing, not the truthfulness of the content itself. In testing 60 combinations of these technologies, only 20 achieved "high-confidence authentication." This necessitates either a confirmed C2PA manifest or a watermark that directs to such a manifest. Microsoft's recommendations urge displaying only high-confidence results publicly, with less certain indicators reserved for forensic analysis to avoid public confusion. The report also delves into "reversal attacks," where authenticity signals are manipulated, making genuine content seem fake and vice versa. Microsoft advises platforms to show detailed provenance information, including the extent of any edits, to mitigate such attacks. The report further identifies local devices as weak links in media authentication, advocating for secure cloud environments for media creation and signing. Smartphones provide better security than traditional computers, but cameras vary, with newer models supporting secure standards while basic ones do not. The study also comments on AI-based deepfake detection tools, which while helpful, are not foolproof and are engaged in constant competition with evolving adversarial tactics. The legislative landscape is also examined, noting that laws in places like California and the EU demand permanent, hard-to-remove AI disclosures, which current technology struggles to meet. Microsoft cautions that rushing inadequate systems could erode public trust. Although the report serves as a self-regulatory guide to bolster Microsoft's image as a reliable entity, it remains uncertain if the company will implement its own recommendations. The firm's AI ecosystem, including its collaboration with OpenAI, positions Microsoft at the forefront of addressing AI-driven challenges. Hany Farid from UC Berkeley, not involved in the study, believes that widespread adoption of Microsoft's framework could significantly reduce deceptive content, although not entirely resolve the problem.

Nvidia Plans $30 Billion Investment in OpenAI Amid Massive Funding Round
Nvidia is reportedly on the brink of investing $30 billion in OpenAI, as reported by Reuters, citing a source with insight into the situation. This substantial investment is part of a larger funding round where OpenAI seeks to amass over $100 billion, potentially valuing the company at approximately $830 billion. Such a valuation would establish it as one of the most significant private fundraising efforts ever recorded. Alongside Nvidia, other notable participants in this funding round include SoftBank and Amazon. OpenAI intends to allocate a major portion of the newly acquired funds to purchase Nvidia chips, which are essential for the training and deployment of its artificial intelligence models. According to the Financial Times, this investment supersedes a previous agreement announced in September. That deal involved Nvidia committing up to $100 billion to facilitate OpenAI's chip usage for data centers. However, the finalization of that agreement took longer than anticipated, leading to the current revised terms.

David Silver Secures $1 Billion Seed Funding to Pioneer Superintelligence Through Reinforcement Learning
Esteemed AI researcher David Silver, formerly of Google DeepMind, has successfully raised a staggering $1 billion in seed funding for his new enterprise, Ineffable Intelligence. This London-based venture aims to redefine the development of superintelligent systems by eschewing traditional large language models (LLMs) in favor of reinforcement learning methodologies. According to the Financial Times, the funding round, spearheaded by Sequoia Capital, places the startup's valuation at an impressive $4 billion, marking it as the most substantial seed financing for a European startup to date. Ineffable Intelligence, under Silver's guidance, will focus on AI systems capable of autonomous learning through reinforcement learning. This method allows AI to evolve by interacting with their environment, a process akin to human and animal adaptation. Silver, a pivotal figure in the creation of groundbreaking AI systems like AlphaGo, AlphaZero, and MuZero at DeepMind, has chosen to continue advancing AI without relying on vast text data sets, which he views as inherently limiting. The vision for Ineffable Intelligence is to develop AI that can independently acquire knowledge, moving beyond the constraints of human-taught information. Silver, alongside computer scientist Richard Sutton, has theorized in their 'Era of Experience' paper that future AI systems will surpass human capabilities by learning predominantly from experiential data. At the core of this approach are 'world models,' which allow AI to simulate and predict the outcomes of their actions, thereby continuously refining their understanding and capabilities over time. Silver's efforts reflect a growing trend among AI experts who are questioning the sufficiency of Transformer architectures in achieving true superintelligence. Notable figures such as Ilya Sutskever and Jerry Tworek are also exploring alternative methodologies, emphasizing the need for AI to learn from real-time data. DeepMind CEO Demis Hassabis has also acknowledged the potential of world models in AI's future, despite recognizing existing technical and financial challenges. As Silver embarks on this ambitious journey, he has received support from former colleagues, including Demis Hassabis, who praised Silver's new venture. With Ineffable Intelligence, David Silver is poised to transform the landscape of AI development, pushing the boundaries of what intelligent systems can achieve.

Meta Allocates $65 Million to Support AI-Advocating Politicians in State Elections
Meta is channeling $65 million to sway state elections throughout the United States, according to a report by the New York Times. This marks the largest political funding effort by the company to date, aimed at endorsing candidates who are supportive of artificial intelligence initiatives. To execute this strategy, Meta has established four Super Political Action Committees (PACs). Two of these are newly created: "Forge the Future Project," which focuses on backing Republican candidates, and "Making Our Tomorrow," which targets Democratic candidates. These new entities join two existing Super PACs to maximize Meta's influence. The initiative kicks off this week in Texas and Illinois. In Texas, where Meta is in the process of constructing three AI data centers, the funds are directed toward strengthening Republican candidates' campaigns. Meanwhile, in Illinois, the financial support is distributed among at least four races for legislative seats. This strategic investment seems to stem from Meta's apprehensions regarding varying state-level artificial intelligence regulations. Given the relatively low cost of influencing state elections, the $65 million investment could significantly impact the outcomes, potentially shaping regulatory landscapes in favor of AI advancements.

AI Benchmarks Reveal Significant Vulnerability Exploitation in Smart Contracts
In a groundbreaking development, OpenAI and crypto investment firm Paradigm have introduced EVMbench, a new benchmark designed to evaluate the proficiency of AI agents in identifying, fixing, and exploiting security vulnerabilities within Ethereum smart contracts. This dataset encompasses 120 vulnerabilities, sourced from 40 real-world security audits, providing a comprehensive test for AI capabilities. The most advanced test scenario requires AI agents to autonomously interact with a local blockchain to execute attacks. Among the AI models tested, GPT-5.3-Codex demonstrated impressive results, successfully exploiting 72% of the vulnerabilities and resolving 41.5% of them. In terms of vulnerability detection, Claude Opus 4.6 led the pack with a 45.6% success rate. According to the researchers, the primary challenge for AI agents lies in detecting vulnerabilities within extensive codebases. When agents received hints regarding the locations of vulnerabilities, the success rates for exploitation surged from 63% to 96%, while fix rates increased from 39% to 94%. Considering the massive $100 billion locked in smart contracts, the study highlights both an opportunity to enhance security and a potential threat if such AI capabilities are misused. These findings underscore the critical importance of safeguarding smart contract systems against advanced AI exploitation.

Nvidia Secures Major Deal with Meta and Expands into CPU Market Amid Rising Competition
Nvidia has secured a significant multi-year agreement with Meta, marking a pivotal moment as the company ventures further into the CPU market to tackle growing competition. This deal, encompassing millions of chips, includes not only Nvidia’s Blackwell and Rubin GPUs but also introduces standalone Grace and Vera CPUs for the first time. Analysts estimate the deal to be worth billions of dollars, signifying a strategic shift for Nvidia as it begins selling CPUs separately, targeting the burgeoning inference market. While GPUs are essential for training and inference of large AI models, CPUs offer a more cost-effective and energy-efficient solution for numerous smaller inference tasks. Meta’s decision to rely on Nvidia hardware contrasts with other hyperscalers like Amazon and Google, who develop their own processors. Reports suggest Meta's internal chip development has faced technical hurdles and delays. Under this agreement, Meta will purchase millions of Nvidia chips, including the current Blackwell GPUs, forthcoming Rubin GPUs, and standalone Grace and Vera CPUs. The financial details of the deal remain undisclosed, but experts, like Ben Bajarin from Creative Strategies, estimate its value in the tens of billions. This partnership aligns with Meta CEO Mark Zuckerberg's plans to nearly double the company’s AI infrastructure investment to potentially $135 billion by 2026. The noteworthy aspect of this deal is Meta’s decision to implement Nvidia's CPUs as standalone products at scale, departing from Nvidia’s previous strategy of bundling Grace processors with GPUs in "Superchips." In January 2026, Nvidia officially began offering CPUs independently, with neocloud provider CoreWeave being the first customer. The AI industry is witnessing a shift from a GPU-heavy focus on training large models to an emphasis on inference, where trained models are executed. For these tasks, GPUs can be excessive. Bajarin highlighted this transition from a "training" era to an "inference era," which demands a new approach. Ian Buck, Nvidia’s VP and General Manager of Hyperscale and HPC, noted that the Grace processor offers "2x the performance per watt" for backend tasks like database management. Meta has reportedly tested the Vera CPU, showing promising results. The Grace CPU utilizes 72 Arm Neoverse V2 cores with LPDDR5x memory, enhancing bandwidth and compactness. Meanwhile, the Vera CPU, featuring 88 custom Arm cores with multi-threading and confidential computing capabilities, is planned for deployment in 2027 for use in WhatsApp's encrypted messaging service. Nvidia’s decision to market CPUs individually puts the company in direct competition with server market giants like Intel and AMD. Meta’s choice to purchase standalone Nvidia CPUs sets it apart from other major companies such as Amazon with its Graviton processors and Google with Axion, even as Meta develops its own AI chips, albeit facing technical challenges. Nvidia is also dealing with intensified competition as Google, Amazon, and Microsoft introduce new proprietary chips. OpenAI, in collaboration with Broadcom, has partnered with AMD, and startups like Cerebras offer specialized inference chips, challenging Nvidia’s market dominance. To strengthen its position, Nvidia recently acquired talent from inference chip company Groq. Last year, Nvidia’s stock saw a slight dip following rumors of Meta negotiating with Google to use its Tensor Processing Units, though no agreement has been confirmed. Despite these developments, Meta continues to operate AMD Instinct GPUs and is involved in the design of AMD’s upcoming Helios rack systems.

Perplexity Eliminates Ads, Emphasizes Commitment to Accuracy
Perplexity, a pioneering AI firm, has decided to remove advertising from its search engine, emphasizing its dedication to maintaining user trust. The company, which initially experimented with ads in 2024, started phasing them out by the end of last year. A spokesperson from Perplexity mentioned in an interview with the Financial Times that the presence of ads led users to doubt the credibility of the information provided by the platform. Positioning itself as a business focused on accuracy, Perplexity is shifting its revenue strategy towards subscription models, offering plans that vary from $20 to $200 monthly. Currently, the platform boasts over 100 million users and holds a market valuation of $18 billion. This strategic move sets Perplexity apart from major industry players like OpenAI and Google, both of which continue to incorporate ads into their AI services. In contrast, Anthropic, another competitor, has also pledged to keep its chatbot Claude free from advertisements, even highlighting this commitment during a Super Bowl advertisement. Perplexity's recent announcement can also be seen as a strategic marketing maneuver, similar to previous bold moves such as its offers to acquire TikTok and Chrome. The decision emphasizes Perplexity's ongoing effort to distinguish its brand as one focused on delivering precise and reliable information. Source: Financial Times

Apple's Smart Glasses Progressing Faster Than Anticipated, Set for 2026 Production
Apple is advancing rapidly in the development of its smart glasses, aiming for production to commence by late 2026, as reported by Bloomberg. This ambitious move includes three wearable AI devices: smart glasses, a pendant, and AirPods equipped with a camera. These technologies have been the subject of speculation, but new insights reveal significant progress. The smart glasses, internally known as N50, are reportedly more advanced than initially thought. Apple has begun distributing broader prototypes within the company and is in the process of designing custom frames. The production is projected to start in December 2026. The glasses are expected to have dual cameras—one dedicated to capturing high-resolution images and another for computer vision, akin to the Vision Pro. Additionally, Apple is developing a pendant device. This pendant, comparable in size to an AirTag, can be worn with a clip or chain and boasts processing capabilities similar to those of AirPods. The company is still deliberating over the inclusion of a speaker. Meanwhile, the camera-integrated AirPods could potentially be launched as soon as this year. However, the pendant is not expected to hit the market before 2027. All three devices are centered around Siri and designed to complement the iPhone. The Vision Pro team has now taken on the task of developing both the smart glasses and the pendant, marking a significant step in Apple's wearable technology strategy.

Manus Debuts 'Agents' Feature on Telegram, Bypassing Meta's WhatsApp
Manus has introduced its innovative 'Agents' mode on Telegram, marking the first platform to support this new AI agent feature. Despite Meta's acquisition of Manus in late 2025, the company has chosen Telegram for the initial launch, with other platforms expected to follow. This new mode allows users to perform intricate tasks directly within chat, utilizing a simple QR code connection and accessible to all users regardless of their subscription level. The 'Agents' feature on Telegram encompasses the full range of capabilities found in the web version, including multi-step processes, research, data handling, and document production. Users can also send voice messages, images, and files to the agent, selecting between two models: Manus 1.6 Max for more demanding tasks and Manus 1.6 Lite for swift inquiries. Importantly, Manus assures users that the agent does not have access to other Telegram conversations. The decision to prioritize Telegram over WhatsApp, despite Meta's ownership, is intriguing. The acquisition deal is currently under evaluation by Chinese regulatory bodies, which might influence this strategic choice. Alternatively, Meta could be aiming to pilot this feature on a platform not directly associated with its brand, mitigating potential risks as agent technologies remain sensitive, particularly in terms of cybersecurity—a vulnerability highlighted by recent issues with AI software like OpenClawd. For more information, users are encouraged to visit the Manus website.

Potters Bar: A Small Town Caught in the Global AI Infrastructure Surge
Just a short trip from London lies Potters Bar, a town separated from the village of South Mimms by 85 acres of farmland, with hedgerows tracing through the landscape. In one of these fields, a solitary oak tree has become a focal point for local protest. A sign affixed to its trunk declares, “NO TO DATA CENTRE.” The catalyst for this uprising came in September 2024, when a developer sought permission to construct one of Europe’s largest data centers on this pastoral land. As news spread, a Facebook group emerged, rallying over 1,000 locals to oppose the project. Despite their efforts, the local government approved the planning in January 2025. By October, Equinix, a multinational data center operator, acquired the land with plans to commence construction this year. On a dreary January afternoon, I met with Ros Naylor, an admin of the protest group, and a handful of local residents near the site. They voiced their concerns, focusing on the loss of green space—a valued escape to nature and a shield against the encroaching urban sprawl. “Walking through this area is essential for mental health and wellbeing,” Naylor emphasized. As the UK government strives to meet the increasing demand for data centers crucial for AI development, such facilities are sprouting nationwide. Yet, for those living nearby, the promise of economic growth or enhanced smartphone capabilities offers little solace for the disruption to their rural lifestyle. In recent years, the UK government has redefined certain green belt lands as 'grey belt,' allowing construction on underperforming areas. This change, coupled with the classification of data centers as critical infrastructure, has paved the way for numerous new developments. Global AI labs are poised to invest trillions in infrastructure to advance their models, but these projects are often met with resistance. When Potters Bar’s data center was approved, authorities deemed the farmland grey belt, influenced by governmental support for the data center industry. They argued that economic benefits outweighed the loss of green space. Jeremy Newmark, leader of Hertsmere Borough Council, noted that the land was a low-performing green belt patch, challenging the romanticized view of lush countryside. The protest group questions this reclassification, especially when a nearby housing proposal was rejected to preserve green belt land. “How can one field be dispensable and another invaluable?” asked Eamonn Lynch, a local resident. They feel overwhelmed by the planning process, claiming their objections were overshadowed despite significant local opposition. Efforts to overturn the decision through official complaints and appeals have so far been unsuccessful. Newmark insists the consultation process was thorough, and each application is judged independently. In response to protests, the local government underscores the economic benefits of the data center, anticipating over $5 billion in investment, 2,500 construction jobs, and 200 permanent positions. The facility is expected to generate $27 million annually in property taxes, benefiting local services. Newmark argues that such an investment will significantly impact the local economy, potentially attracting more high-tech businesses. For Equinix, this site is strategically located near major population hubs and existing facilities, offering low latency and strong power infrastructure. Andrew Higgins of Equinix assures that half the site will remain green, enhancing biodiversity with planned ponds, wetlands, and meadows. He hopes to balance development with environmental responsibility. Equinix must still complete a final planning phase before construction can begin, facing a determined protest group intent on challenging every step. Michael Batty, a planning expert, highlights the importance of public objection in Britain’s planning system. In January, after the group dispersed, I walked the farmland with Janet Longley, a long-time resident. As her dog, Lola, bounded through the muddy fields, she reflected on the need for data centers and the digital services they enable. Despite this understanding, Longley laments the loss of a beloved green space and wishes the project could be relocated. “Beauty is in the eye of the beholder,” she mused, gesturing across the landscape. “It is actually beautiful. Just maybe not so much today.”

Cohere Unveils Open Multilingual Models at India AI Summit
Cohere, a leader in enterprise AI, has introduced a new series of multilingual models known as Tiny Aya during the India AI Summit. These models, designed with open-weight architecture, allow public access and modification of their code, supporting over 70 languages. Remarkably, they can function on standard devices such as laptops without requiring an internet connection. Developed by Cohere Labs, the models cater to a variety of South Asian languages including Bengali, Hindi, Punjabi, Urdu, Gujarati, Tamil, Telugu, and Marathi. At their core, the models boast 3.35 billion parameters, highlighting their complexity and capacity. Additionally, Cohere has released TinyAya-Global, a refined version tailored to better understand user instructions, ideal for applications demanding extensive language support. The Tiny Aya model family includes specific regional versions: TinyAya-Earth for African languages, TinyAya-Fire for South Asian languages, and TinyAya-Water for languages across Asia Pacific, West Asia, and Europe. Cohere emphasizes that this tailored approach enhances the linguistic and cultural relevance of each model, making them more intuitive and dependable for their intended communities. Despite these specializations, all Tiny Aya models maintain wide-ranging multilingual capabilities, providing versatile foundations for further innovation and research. These models were developed on a single cluster of 64 H100 GPUs from Nvidia, demonstrating their efficiency with moderate computing resources. Designed for ease of use on devices, they enable developers to implement offline translation and other applications seamlessly. Cohere's software is optimized for on-device performance, requiring less computational power than many similar models. In countries with rich linguistic diversity, like India, the ability to operate offline can significantly expand the potential applications and use cases, eliminating the need for constant internet connectivity. The models are accessible on platforms like HuggingFace, Kaggle, and Ollama, offering opportunities for local deployment. Cohere is also sharing training and evaluation datasets on HuggingFace and plans to publish a comprehensive report on their training processes. The company has ambitious plans for the future, as noted by CEO Aidan Gomez, who indicated intentions to go public soon. Cohere concluded 2025 with a strong financial performance, achieving $240 million in annual recurring revenue and 50% growth quarter-over-quarter, according to CNBC.

Blackstone Invests Up to $1.2 Billion in Indian AI Startup Neysa to Boost Domestic AI Infrastructure
U.S. private equity giant Blackstone has announced its support for Indian AI infrastructure company Neysa, as the startup enhances its domestic computing capabilities in response to India’s growing focus on developing its own AI infrastructure. Blackstone, along with co-investors like Teachers’ Venture Growth, TVS Capital, 360 ONE Asset, and Nexus Venture Partners, plans to inject up to $600 million in primary equity into Neysa. This investment positions Blackstone as the majority stakeholder in the Mumbai-based company. Additionally, Neysa is seeking to raise another $600 million through debt financing to expand its GPU capacity, significantly up from the $50 million it previously raised. This investment comes at a time when global demand for AI computing resources is skyrocketing, leading to shortages in specialized chips and data center space necessary for training and operating large AI models. Emerging companies like Neysa are stepping into this gap, offering specialized GPU capacity and faster deployment services than traditional cloud providers. These 'neo-clouds' are particularly appealing to enterprises and AI labs that require specific regulatory compliance, latency considerations, or custom solutions. Neysa distinguishes itself by delivering tailored, GPU-centric infrastructure for businesses, government entities, and AI developers within India, where local computing needs are still developing but rapidly growing. Neysa co-founder and CEO Sharad Sanghi highlighted the company's commitment to providing personalized support that many large cloud providers do not offer, such as 24/7 assistance and quick response times. Ganesh Mani, a senior managing director at Blackstone Private Equity, noted that India currently has fewer than 60,000 GPUs deployed, a number expected to expand to over two million in the coming years. This growth is fueled by government initiatives, the needs of regulated industries like finance and healthcare, and AI developers who are increasingly building models locally. Global AI labs are also looking to move their computing resources closer to Indian users to enhance service efficiency and meet data regulations. Blackstone's investment strategy has seen similar ventures in data center and AI infrastructure worldwide, including backing for platforms like QTS, AirTrunk, CoreWeave, and Firmus. Neysa's current operations involve about 1,200 active GPUs, with plans to scale to over 20,000 as demand rises. Sanghi mentioned that discussions for expanding capacity are well underway, and deployments could accelerate within the next nine months. The new funding will primarily focus on deploying large-scale GPU clusters, covering compute, networking, and storage facilities. A portion will also support research and development and the enhancement of Neysa's software platforms for orchestration, observability, and security. Neysa aims to more than triple its revenue in the coming year, driven by the increasing demand for AI solutions, with plans to extend its reach beyond India eventually. Established in 2023, Neysa employs 110 staff across its offices in Mumbai, Bengaluru, and Chennai.

Indian Startup C2i Secures Investment to Tackle AI Data Center Power Challenges
As AI data centers increasingly face power constraints, Indian startup C2i Semiconductors is stepping up with innovative solutions. Recognizing the crucial role of energy efficiency, Peak XV Partners has invested in C2i, which is developing system-level power solutions to minimize energy loss and enhance AI infrastructure economics. The startup recently secured $15 million in a Series A funding round led by Peak XV Partners, alongside Yali Deeptech and TDK Ventures, bringing its total funding to $19 million. With the global demand for data center energy skyrocketing, data centers are projected to nearly triple their electricity consumption by 2035, according to a December 2025 BloombergNEF report. Furthermore, Goldman Sachs Research anticipates a 175% increase in power demand by 2030 compared to 2023, equivalent to adding another top-10 power-consuming nation. The key challenge lies in efficiently converting high-voltage power within data centers, a process currently resulting in 15% to 20% energy wastage, as explained by C2i's co-founder and CTO, Preetam Tadeparthy. C2i, founded in 2024 by a team of former Texas Instruments power executives, including Ram Anant, Vikram Gakhar, Preetam Tadeparthy, and Dattatreya Suryanarayana, is revolutionizing power delivery with a plug-and-play 'grid-to-GPU' system. This approach integrates power conversion, control, and packaging, potentially reducing end-to-end energy losses by about 10%, which translates to significant savings in cooling costs and improved GPU utilization. For Peak XV Partners, the appeal of C2i lies in the potential to significantly lower energy costs, a critical factor in the ongoing expense of data centers after initial investments in servers and facilities. Rajan Anandan, managing director of Peak XV Partners, highlighted the substantial financial impact of reducing energy costs by even 10% to 30%. C2i is poised for rapid validation, with its first silicon designs expected to return between April and June. The startup plans to test its solutions with data center operators and hyperscalers who have expressed interest. With a team of around 65 engineers and customer operations expanding to the U.S. and Taiwan, C2i is gearing up for early deployments. While the power delivery domain is dominated by established players, C2i's comprehensive approach, which involves coordinating silicon, packaging, and system architecture, sets it apart. This ambitious strategy requires significant capital and time to prove effective in real-world environments. Anandan emphasized the importance of execution, noting that all startups face risks related to technology, market, and team dynamics. The next six months will be crucial for C2i, as silicon validation and customer feedback will test its innovative solutions. India's semiconductor design ecosystem has matured, with a growing pool of engineering talent and government incentives reducing the cost and risk of developing competitive semiconductor products. This evolution positions Indian startups like C2i to potentially make a significant impact on the global stage. As C2i begins validating its solutions, the coming months will reveal whether these conditions can lead to a globally competitive product.

OpenClaw Innovator Peter Steinberger Joins OpenAI to Advance AI Agents
Peter Steinberger, renowned for his work on the open-source project OpenClaw, has taken a significant step by joining OpenAI. His mission at OpenAI will center around the development of cutting-edge personal AI agents. Sam Altman, CEO of OpenAI, praised Steinberger as a "genius" with visionary ideas about the future of intelligent agents collaborating effectively to perform valuable tasks for users. Altman anticipates that this initiative will rapidly become integral to OpenAI's suite of products. OpenClaw, which began as Steinberger's hobby project and recently gained widespread attention, will continue as an open-source initiative under the stewardship of a foundation. OpenAI will support this endeavor, with Altman noting that the future looks "extremely multi-agent." In a recent blog post, Steinberger shared that after considering various leading AI laboratories in San Francisco, he opted to join OpenAI due to their aligned vision. His objective is to create an AI agent so intuitive that even his mother could use it. Achieving this, according to Steinberger, will necessitate foundational changes, increased security research, and access to cutting-edge models. Steinberger expressed his ambition to make a global impact rather than build a large company, viewing a partnership with OpenAI as the quickest route to achieve widespread adoption. His commitment to changing the world underscores his dedication to leveraging AI for broad societal benefit.

Concerns Rise Over Safety and Direction at Elon Musk's xAI
Elon Musk is reportedly pushing for a more 'unhinged' version of the Grok chatbot at his AI firm, xAI, according to insights from a former employee shared with The Verge. Recent announcements have revealed that SpaceX, another of Musk's ventures, is set to acquire xAI, which had previously taken over his social media company, X. Amid these changes, at least 11 engineers and two co-founders have announced their departure from the company. Some of these exits are attributed to the desire to pursue new ventures, while Musk has indicated that this reshuffling could help streamline xAI's operations. However, two former employees, one of whom left before the current wave of departures, have expressed concern over xAI’s apparent neglect for safety protocols. This issue has garnered international attention, particularly following the creation of over one million sexualized images by Grok, including controversial deepfakes involving real women and minors. One insider remarked that 'Safety is a dead org at xAI,' while another mentioned Musk's preference for a less constrained model, equating safety measures to a form of censorship. Moreover, these sources highlighted a perceived lack of clear direction within the company, with one stating that xAI seems to be 'stuck in the catch-up phase' when compared to its industry competitors.

Bytedance's Seed2.0 Intensifies Competition with Affordable AI Solutions
Bytedance has unveiled its latest AI model series, Seed2.0, significantly ramping up the competitive pressure on Western AI models by offering similar capabilities at substantially lower costs. This new release includes three different model sizes—Pro, Lite, and Mini—as well as a specialized model for coding tasks. One of the standout features of the Seed2.0 series is its enhanced multimodal processing ability, which has been refined to better comprehend documents, tables, graphics, and videos. In performance tests, the Seed2.0 Pro model has excelled, achieving leading scores in various benchmarks related to visual math, logic, and perception. It outperformed well-known Western models such as GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro in several key areas. The Seed2.0 Pro's achievements extend beyond these benchmarks to international competitions. It attained gold medal-level scores in the International Mathematical Olympiad (IMO), the Chinese Mathematical Olympiad (CMO), and the International Collegiate Programming Contest (ICPC). Impressively, it claimed gold in all five ICPC contests it participated in, surpassing both Gemini 3 Pro and GPT-5.2. However, like its peers, it stumbled on problem 6 at the IMO. Despite its successes, the Seed2.0 Pro does have some limitations. It trails behind Claude in code generation and falls short of Gemini in areas requiring extensive knowledge. Additionally, it is less effective than its Western competitors at minimizing hallucinations, according to Bytedance's evaluations. Pricing for Seed2.0 models significantly undercuts that of Western counterparts, with the Seed2.0 Pro API priced at approximately $0.47 per million input tokens and $2.37 per million output tokens. In contrast, Anthropic's Claude Opus 4.5 starts at $5.00 per million input tokens and $25.00 per million output tokens. The cost for other models like GPT-5.2 and Gemini 3 Pro is also notably higher. The Seed2.0 Pro model is accessible through Doubao, a popular Chinese chat application, while the dedicated code model is available via the TRAE developer tool. All variants of Seed2.0 operate on Bytedance's Volcano Engine cloud platform. Currently, detailed technical information is provided exclusively in Chinese, but additional insights can be found in the model card. This strategic pricing and performance combination by Bytedance positions Seed2.0 as a formidable contender in the AI landscape, offering competitive alternatives to more expensive Western models.

Reporter Tests AI Gig Economy, Ends Up Empty-Handed After Two Days
Reece Rogers, a journalist for WIRED, embarked on an unusual experiment by offering his services to AI agents through the RentAHuman platform. This platform connects AI with humans for real-world tasks, typically at modest rates. Despite an offered $5 per hour, Rogers found himself without any engagement from the AI side, prompting him to proactively seek work. One opportunity promised $10 for listening to a podcast and tweeting about it, but Rogers received no further contact. Another AI, named Adi, proposed $110 for delivering flowers and marketing materials to the AI startup Anthropic. However, Adi's relentless follow-ups—ten messages within a day and additional emails—left Rogers feeling overwhelmed. On his third attempt, Rogers accepted a job to distribute flyers for 50 cents each. He took a cab to the designated pickup spot, only for the meeting location to shift mid-journey. Upon arrival at the new location, he learned the flyers were not yet available and was advised to return later in the day. After two days of effort, Rogers had not earned anything, with each task aligning suspiciously with promotional activities for AI startups. Rogers's experience highlights the challenges and frustrations of the emerging AI gig economy.

Google Deepmind's Bioacoustic Model Excels in Whale Detection Using Bird Call Training
In a surprising demonstration of machine learning's potential, Google Deepmind has unveiled a bioacoustic model that excels in identifying whale songs despite being primarily trained on bird calls. This breakthrough underscores the power of generalization in artificial intelligence, showing how models can transfer skills across vastly different domains. The research highlights that the model, known as Perch 2.0, outperformed specialized whale-detection models, including Google's own Multispecies Whale Model. This success is attributed to the fine-grained distinctions required for bird call classification, which seem to translate effectively to identifying marine mammal sounds due to similar evolutionary sound production mechanisms shared between birds and marine mammals. The Perch 2.0 model, consisting of 101.8 million parameters, was trained on over 1.5 million recordings from a diverse range of animal species, predominantly birds. Despite the scarcity of aquatic recordings in its training dataset, the model's performance in marine sound classification tasks is remarkable. It was tested against three datasets, including NOAA PIPAN, ReefSet, and DCLDE 2026, demonstrating exceptional accuracy in these challenging environments. When tasked with distinguishing orca sounds from different subpopulations, Perch 2.0 achieved an AUC-ROC score of 0.945, significantly outpacing the whale-specific model's score of 0.821. The general-purpose model's ability to classify marine sounds with minimal training data is a testament to its robust design. The research team identified several reasons for this unexpected cross-domain success. Firstly, larger models tend to generalize better, benefiting from neural scaling laws. Secondly, bird classification demands the detection of minimal acoustic differences, honing the model's ability to discern subtle variations. Lastly, the shared sound production methods between birds and marine mammals enhance the model's ability to transfer knowledge across species. This advancement holds practical implications for marine bioacoustics, where new sounds are frequently discovered. The model's ability to quickly train classifiers for newly identified sounds could revolutionize the field, allowing for rapid analysis and classification of marine noises. Google's initiative includes providing open access to the tools and resources necessary for using this model, with tutorials available on Google Colab and code hosted on GitHub. This makes it easier for researchers and enthusiasts to leverage the model's capabilities for further exploration in bioacoustics.

Airbnb Integrates AI to Enhance User Experience and Operations
Airbnb is gearing up to introduce AI-driven enhancements to its app, aiming to revolutionize how users search for accommodations, plan their trips, and assist hosts in property management. CEO Brian Chesky announced on Friday that the company plans to leverage large language models to improve customer discovery, support, and engineering. During the fourth-quarter earnings call, Chesky emphasized the goal of creating an AI-native experience that goes beyond simple searches. "Our app will understand and assist users throughout their entire journey, enabling guests to plan comprehensive trips and helping hosts manage their businesses more effectively," he explained. Airbnb is also testing a new feature that allows users to search and ask questions about listings and destinations using natural language. Currently, the platform offers a language model-powered customer service bot for some level of personalization and communication. The newly anticipated AI search feature aims to develop into a more comprehensive and intuitive tool that enhances the travel experience from start to finish. When asked about the potential for sponsored property slots within AI search results, Chesky stated that the company is focused on perfecting the design and user experience before moving forward. "The AI search is currently live for a limited audience. We are conducting numerous experiments to make it more conversational and integrated into the overall travel experience. Eventually, we may consider incorporating sponsored listings," Chesky added, noting the possibility of designing ad units that align with the conversational search model. Airbnb is set to benefit from the expertise of its new CTO, Ahmad Al-Dahle, formerly of Meta's Llama project, to harness its extensive data on user identities and reviews. The company's AI-powered customer support bot, launched in North America last year, now resolves a third of customer issues autonomously. Chesky highlighted plans to enable customers to interact with the AI bot via phone and expand its language capabilities across all regions where Airbnb operates. "In a year, we aim to have over 30% of support requests handled by AI in all the languages we support, transitioning from chat to voice interactions," he said. Internally, Airbnb is also increasing its reliance on AI. While 80% of its engineers currently use AI tools, the goal is to reach full adoption. The company reported a revenue of $2.78 billion for the fourth quarter, marking a 12% increase compared to the previous year.

Airbnb Leverages AI to Handle a Third of Customer Support in North America
Airbnb has announced that its AI-driven customer support system is now managing about one-third of support inquiries in the United States and Canada. The company is planning to expand this technology globally. Airbnb aims for AI to handle over 30% of customer support queries worldwide, in all languages where they currently have human support. CEO Brian Chesky, during the company's recent earnings call, expressed confidence that this move will not only cut costs but also significantly enhance service quality. Chesky implied that AI could outperform humans in resolving certain issues. The company highlighted its strategic hiring of CTO Ahmad Al-Dahle, an AI specialist formerly with Meta, to spearhead this initiative. Under Al-Dahle's leadership, Airbnb plans to develop an enhanced AI-driven app tailored to user preferences. Chesky explained that this app would assist guests in planning their trips, aid hosts in managing their properties, and improve operational efficiency. Chesky praised Al-Dahle's expertise, citing his extensive experience at Apple and Meta, where he led the development of generative AI models. This expertise is seen as crucial for transforming the Airbnb experience. Airbnb's leadership emphasized that its vast proprietary data sets it apart from other AI solutions. Chesky noted that a generic AI chatbot lacks access to Airbnb's 200 million verified identities and 500 million exclusive reviews. Additionally, he mentioned that a large percentage of guests communicate directly with hosts, a feature unique to Airbnb. Airbnb predicts revenue growth in the "low double digits" this year, following a strong fourth quarter that exceeded expectations. Chesky dismissed concerns about AI platforms entering the short-term rental market, asserting that Airbnb's comprehensive services, including insurance and user verification, provide a strong competitive edge. He further noted that AI chatbots function similarly to search engines, driving high-conversion traffic. Airbnb is already integrating AI into its search capabilities, experimenting with conversational search features and planning to add sponsored listings. In contrast to Spotify, which reported reduced coding by its developers due to AI, Airbnb revealed that 80% of its engineers currently utilize AI tools, with plans to increase this to 100%. This highlights Airbnb’s commitment to integrating AI across its operations.

Pinterest Ranks Among Top Search Platforms, Surpassing ChatGPT in Search Volume, Amidst Earnings Miss
Pinterest CEO Bill Ready recently highlighted the platform's impressive search capabilities, surpassing even the popular AI tool ChatGPT. Despite underwhelming fourth-quarter financial results, Ready emphasized Pinterest's strength as a search destination. He cited third-party data showing Pinterest handles 80 billion searches monthly, ahead of ChatGPT’s 75 billion, alongside generating 1.7 billion monthly clicks. Ready noted that over half of Pinterest's searches are commercial, significantly more than ChatGPT's estimated 2%. The disappointing financial report saw Pinterest miss projections, with $1.32 billion in revenue against an expected $1.33 billion, and earnings per share at 67 cents instead of the anticipated 69 cents. Looking ahead, the company projects first-quarter 2026 revenues between $951 million and $971 million, falling short of the $980 million forecasted. The company attributed these shortfalls to reduced spending by major advertisers, particularly in Europe, and challenges from a new furniture tariff introduced in October, which impacted the home category. This financial strain comes despite a faster-than-expected growth in Pinterest's user base, which rose 12% year-over-year to 619 million, surpassing Wall Street’s forecast of 613 million. However, these positive user metrics did not prevent a 20% drop in Pinterest’s stock during after-hours trading. Facing the evolving market landscape, particularly the shift towards AI-driven platforms, Pinterest is focusing on its visual search and personalization tools to enhance user experience and drive commercial searches. Ready pointed out that their partnership with Amazon has streamlined the checkout process, which could support their commercial strategy. Ready expressed optimism about the platform's ability to guide users through shopping experiences without needing typed prompts. He also addressed the potential future where AI could facilitate purchases directly, indicating Pinterest is prepared to adapt when consumers embrace such technology.

IBM to Expand Entry-Level Hiring Amidst AI Advancements
Contrary to the trend of reducing entry-level positions due to artificial intelligence advancements, IBM is taking a different approach by significantly increasing its hiring efforts. The tech company plans to triple its entry-level recruitment in the United States by 2026, as reported by Bloomberg. This development was announced by IBM's Chief Human Resource Officer, Nickle LaMoreaux, during Charter's Leading with AI Summit. LaMoreaux emphasized that these roles are precisely those that are often believed to be replaceable by AI. However, IBM is reshaping these entry-level positions to focus on human-centric tasks rather than areas that AI can easily automate, such as coding. This shift aims to enhance skills in customer engagement and interaction, which are vital for future career progression within the company. IBM's strategy highlights the importance of nurturing new talent to prepare them for advanced roles as they grow within the company, even if the immediate demand for entry-level talent has decreased. While IBM has not disclosed the exact number of positions it plans to fill, the move underscores the company's commitment to evolving its workforce in the face of AI-driven changes. The potential impact of AI on the job market is a subject of significant interest. An MIT study projected that by 2025, AI could automate approximately 11.7% of jobs. Meanwhile, insights from a TechCrunch survey indicate that many investors believe 2026 will reveal the true influence of AI on employment, even though labor was not the primary focus of the survey. In related news, the TechCrunch Founder Summit 2026 is set to take place in Boston on June 23, where over 1,100 founders will gather to discuss growth strategies, execution, and scaling in the real world. The event provides an opportunity to learn from industry leaders and network with peers, offering actionable insights for immediate application. Attendees can benefit from discounted passes and group ticket offers.

Elon Musk's New Ambition: A Moonbase Alpha for SpaceX and xAI
In a bold move, Elon Musk is inviting talent to join xAI with a unique proposition: the vision of constructing mass drivers on the Moon. This came on the heels of a significant reshuffle at the AI lab, leading to the departure of several executives. After the merger of xAI with SpaceX and the upcoming IPO, Musk's recruitment strategy is shifting from the usual goals of achieving Artificial General Intelligence or disrupting software industries, to a more celestial aim. Following the unveiling of plans to establish AI data centers in space, Musk's vision has expanded. He posed the question, “What if you want to achieve more than a terawatt per year?” His solution? Head to the moon. Musk envisions a future where a lunar city manufactures space computers, launching them into the depths of the solar system via a massive maglev train. These ideas were shared during an all-hands meeting, with a presentation slide showcasing the moon base concept. This ambitious plan comes at a time when SpaceX is pivoting away from its long-standing mission to colonize Mars. Now, with xAI integrated into the fold, Musk's vision draws from the Kardashev Scale—a theoretical framework for assessing a civilization’s technological advancement through its energy consumption. Musk suggests that a lunar base could harness a significant portion of the sun's energy to power AI models, leading to unprecedented advancements in intelligence. Musk’s vision is not just about making headlines; it’s about creating a tangible and compelling narrative for his companies. While previous goals focused on Mars exploration, the moon base aligns with Musk’s evolving focus on artificial intelligence, symbolized by past initiatives like the “Occupy Mars” campaign. However, SpaceX's Mars ambitions faced hurdles, notably the lack of financial backing for such a colossal endeavor. Plans to adapt SpaceX’s Dragon spacecraft for Mars landings were shelved due to prohibitive costs. The Starship, originally intended for Mars colonization, has been repurposed for more profitable ventures, such as deploying satellites for the Starlink network and fulfilling NASA contracts to land on the moon. The idea of building satellites on the moon necessitates a dramatic decrease in space travel costs and innovations in space-based manufacturing, which are still on the horizon. Yet, if retail investors rally behind Musk’s vision, SpaceX could witness a surge akin to Tesla’s market success. For engineers in both AI and aerospace, Musk’s new direction may seem unexpected, but it offers a fresh perspective on xAI’s mission beyond developing language models. Departing executives have remarked on the monotony of current AI lab projects, suggesting that Musk’s lunar supercomputer vision, while ambitious, is anything but dull.

OpenAI Allegedly Employs Tailored ChatGPT to Detect Internal Information Leaks via Slack and Email Monitoring
According to a report by The Information, OpenAI is utilizing a customized version of ChatGPT to identify potential internal leakers by analyzing communications within Slack and email. This specialized version is reportedly employed by OpenAI's security team when internal information becomes public. The team inputs the leaked content into ChatGPT, which is equipped with access to internal documents and communications. The system then attempts to trace the source of the leak by pinpointing documents or communication threads that contain the leaked information and identifying who had access to them. At this time, it remains unclear whether this method has successfully identified any leakers. Specific details about what distinguishes this version of ChatGPT remain undisclosed. However, there is a hint that OpenAI engineers have developed an AI agent capable of complex data analysis using natural language, which could potentially fulfill this role. This agent is designed to tap into institutional knowledge stored in various platforms, including Slack and Google Docs. The architecture of this AI agent was recently showcased by OpenAI engineers, suggesting its potential application in internal security operations.

xAI Unveils Bold Interplanetary Vision During Public Meeting
In an unexpected move, xAI released a full 45-minute all-hands meeting video on Wednesday via their X account, granting public access to the session. This decision followed a New York Times report on the Tuesday evening meeting, potentially prompting xAI to share the video. The footage reveals crucial insights into Elon Musk's ambitious plans for the AI lab, outlining the product roadmap and ongoing collaborations with the X platform. Despite being only 30 months old, xAI's compact and skilled team has achieved impressive milestones, making the future look incredibly promising. During the meeting, Musk addressed the recent wave of employee departures, attributing them to organizational changes that necessitated layoffs. This restructuring has notably affected a considerable segment of the founding team, leading to some uncertainty. "As xAI rapidly expands, its structure must adapt," Musk explained on X. "Regrettably, this meant parting ways with some team members, whom we wish the best in their future ventures." The reorganization divides xAI into four main teams: one dedicated to the Grok chatbot (including voice integration), another focused on the app's coding system, a third working on the Imagine video generation tool, and a final team handling the Macrohard project, which ranges from basic computer operations to simulating entire corporations. "[Macrohard] can do anything a computer can," stated Toby Pohlen, who will lead the project under the new structure. "AI should be capable of designing entire rocket engines." The meeting also highlighted new usage and revenue statistics for both xAI and X. Nikita Bier, X's product head, announced that X has reached over $1 billion in annual recurring revenue from subscriptions, credited to a holiday marketing surge. Additionally, xAI's Imagine tool reportedly produces 50 million videos daily and over 6 billion images monthly, based on internal data. However, these figures are clouded by a surge in AI-generated explicit content on X, with around 1.8 million sexualized images created within nine days, suggesting a significant portion of these metrics may include controversial material. The presentation's most striking moment came as Musk underscored the significance of space-based data centers despite their technical challenges. He went further to envision a lunar factory for AI satellites, complete with a mass driver – an electromagnetic catapult – to launch them. Such infrastructure could enable an AI cluster to harness substantial solar energy or even expand into other galaxies. "It's hard to fathom what an intelligence of that scale would contemplate," Musk remarked, "but witnessing it unfold will be incredibly thrilling."

Modal Labs Eyes $2.5B Valuation in New Funding Talks, Insiders Reveal
Modal Labs, a pioneering startup in AI inference infrastructure, is reportedly in discussions with venture capital firms regarding a fresh funding round, aiming for a valuation around $2.5 billion. This information comes from four individuals familiar with the ongoing negotiations. If successful, this new funding round would significantly bolster Modal's valuation, which stood at $1.1 billion just a few months ago following an $87 million Series B funding. Sources indicate that General Catalyst is poised to lead this round. Modal Labs is experiencing a strong revenue trajectory, with an annualized revenue run rate close to $50 million, according to insiders. However, the talks are still in preliminary stages, and the terms are subject to change. Erik Bernhardsson, Modal Labs' co-founder and CEO, has denied that the company is actively seeking funds, describing his engagements with venture capitalists as routine discussions. General Catalyst has yet to respond to queries for comments. The startup's focus is on refining AI inference, which involves executing trained AI models to respond efficiently to user queries. By enhancing inference, Modal Labs aims to lower computational costs and minimize response delays for users. The company is part of a niche group of inference-specialized startups garnering significant investor interest. Recently, competitor Baseten secured $300 million, reaching a $5 billion valuation, and Fireworks AI attracted $250 million at a $4 billion valuation. In a similar vein, the team behind vLLM, an open-source inference initiative, has formed a startup named Inferact, raising $150 million at an $800 million valuation led by Andreessen Horowitz. Additionally, the creators of SGLang have launched RadixArk, which reportedly raised seed funding at a $400 million valuation led by Accel. Founded in 2021 by Erik Bernhardsson, who previously held senior roles at companies like Spotify and Better.com, Modal Labs has quickly attracted attention in the tech world. Its early investors include Lux Capital and Redpoint Ventures.

Anthropic Expands Claude Cowork AI Assistant to Windows Platform
Anthropic has extended its AI assistant software, Claude Cowork, to the Windows operating system after its initial debut on macOS. Windows users now have access to the same comprehensive features that were available on macOS, including file access, the ability to perform multi-step tasks, various plugins, and MCP connectors that facilitate the integration of external services. Additionally, users can configure global and folder-specific instructions, which Claude adheres to in every session. Currently, Cowork for Windows is in a Research Preview phase, allowing early testing and feedback from users. This feature is accessible to all paying subscribers through Claude's official website at claude.com/cowork. However, users should exercise caution when granting the system access to their files, especially if they contain sensitive or private information, due to potential cybersecurity risks. Generative AI systems can be vulnerable to adversarial prompts, such as prompt injections, among other security threats. In fact, Cowork experienced such an issue shortly after its initial launch.

Elon Musk's AI Startup xAI Sees Half its Founders Depart
February 11, 2026: Elon Musk's artificial intelligence venture, xAI, has witnessed the departure of another key co-founder, Jimmy Ba. Ba, one of the twelve initial founding members, has left the company. Prior to joining xAI, he was an assistant professor at the University of Toronto and studied under AI luminary Geoffrey Hinton. In his farewell message, Ba emphasized that xAI's mission is to push humanity forward on the Kardashev technology scale. He anticipates that AI systems capable of self-improvement could become operational within the next year. Despite these advancements, Ba expressed a desire to "recalibrate my gradient on the big picture." With Ba's exit, the tally of departing co-founders from xAI has reached six, accounting for half of the original team. Earlier departures include notable figures such as Igor Babuschkin, formerly associated with DeepMind and OpenAI, Yuhuai (Tony) Wu from Google and Stanford, Kyle Kosier from OpenAI, Greg Yang from Microsoft Research, and Christian Szegedy from Google. On February 10, 2026, it was announced that Tony Wu, another co-founder, had also decided to leave. Wu played a crucial role in developing xAI's foundational models and reasoning capabilities, reporting directly to Musk. He joined the company from Google when it was initially established in 2023. Wu's departure coincides with recent controversies involving xAI, such as the temporary allowance of deepfake nude photo creation, which was retracted following regulatory intervention. However, Wu's farewell note suggests that his decision to leave was not influenced by these incidents. He expressed gratitude to Elon Musk for his support, reflecting on their collaborative efforts. Babuschkin's exit in August 2025 led to the creation of his own AI safety fund, following controversies over the xAI chatbot Grok, which faced criticism for making far-right statements. The timing of these departures is particularly striking as SpaceX recently announced a takeover of xAI. This acquisition values SpaceX at one trillion dollars and xAI at 250 billion dollars. Despite significant development costs for its models, xAI has struggled to generate substantial revenue, leaving its future trajectory uncertain.

AI Giants Collaborate on European Startup Accelerator Initiative
Leading Western AI research labs have paused their competitive rivalry to collaborate on a new accelerator program aimed at European startups innovating with their AI models. This initiative, known as F/ai, is managed by Paris-based incubator Station F. On Tuesday, Station F revealed its collaboration with major tech players such as Meta, Microsoft, Google, Anthropic, OpenAI, and Mistral, marking a significant first in joint participation by these firms in a single accelerator program. Additional partners include cloud and semiconductor giants like AWS, AMD, Qualcomm, and OVH Cloud. An accelerator program is essentially an intensive training course for early-stage startups. It provides an environment where founders can attend classes, consult with experts, and connect with potential investors and customers. The primary goal is to expedite the process of bringing innovative ideas to market. Each F/ai cohort will consist of 20 startups, all focused on helping European AI enterprises achieve revenue generation early on, thereby facilitating the acquisition of necessary funding to venture into larger global markets. Roxanne Varza, director at Station F, highlighted the program's emphasis on rapid market commercialization in a conversation with WIRED. "Investors are starting to feel like, ‘European companies are nice, but they’re not hitting the $1 million revenue mark fast enough,’” Varza stated. The accelerator, scheduled to run for three months twice annually, launched its inaugural edition on January 13. Although Station F has not disclosed the participating startups, many were recommended by prominent venture capital firms like Sequoia Capital and General Catalyst. These startups are developing AI applications utilizing foundational models from the partner labs, covering diverse fields such as agentic AI, procurement, and finance. Rather than direct funding, the participants will receive over $1 million in credits, exchangeable for access to AI models, computational resources, and other partner services. Historically, European companies have struggled to match their American and Chinese counterparts in AI development and commercialization. To bridge this gap, governments in the UK and EU are investing heavily in homegrown AI firms and infrastructure necessary for AI advancement. In the US, accelerators like Y Combinator have successfully nurtured well-known companies such as Airbnb and Stripe. OpenAI itself was launched in 2015 with support from Y Combinator’s research arm. Station F aspires for F/ai to replicate this success in Europe, elevating local AI startups to compete globally. "It’s for European founders with a global ambition," Varza added. This program also offers US-based AI labs an opportunity to deepen their influence in Europe by incentivizing startups to adopt their technologies early on. Once a startup begins developing on a particular AI model, transitioning to another model can be complex, according to Marta Vinaixa, partner and CEO at Ryde Ventures. "When you build on top of these systems, you’re also building for how the systems behave—their quirkiness," she explained. "The earlier you start with a foundation, the more challenging it becomes to switch."