
Callista AI Weekly (May 12–18)
New AI Use Cases
Real-world deployments of AI continued to accelerate this week, with companies rolling out new pilot programs and integrations that promise tangible business benefits:
Professional Services: Accounting and consulting firm Crowe LLP became one of the first in its industry to deploy OpenAI’s ChatGPT Enterprise across its entire workforce. By providing firmwide access (including in audit, tax, and advisory teams) to a secure generative AI platform, Crowe aims to boost employee productivity and enhance client service. The firm is also enabling developers to tap OpenAI APIs for more advanced use cases, signaling a broad commitment to weave AI into both internal operations and client-facing work. This expansive pilot underscores how service organizations are leveraging generative AI to improve knowledge work and collaboration.
Enterprise Automation: IT provider NTT DATA announced a Smart AI Agent ecosystem to help clients automate industry-specific tasks. The company reported it has already deployed hundreds of autonomous AI agents in real-world settings – for example, in healthcare these agents are triaging insurance claims and drafting medical necessity decisions, while in manufacturing they’re analyzing regulatory reports and even initiating quality control actions. NTT DATA’s program will include an OpenAI Center of Excellence to train more of these AI co-pilots. This illustrates a growing trend of “agentic AI” in business: rather than static models, companies are fielding AI systems that can act, decide, and iterate within defined domains to reduce manual workloads.
Government and Public Sector: Public agencies are also expanding AI trials. (Notably, just prior to this week the U.S. FDA completed a pilot using generative AI to assist scientific document reviews, leading it to fast-track AI adoption agency-wide. And earlier this spring, the North Carolina Department of State Treasurer launched a 12-week ChatGPT pilot with OpenAI to streamline government services.) These initiatives, while slightly outside this week’s date range, have set the stage for broader public-sector use cases now gaining momentum. They highlight that even highly regulated environments are finding safe ways to deploy AI for efficiency gains, such as shortening drug approval times or helping citizens find unclaimed properties.
Talent Acquisition: In the hiring realm, companies are deploying AI to streamline talent acquisition. HR software provider Greenhouse announced new AI-powered recruiting tools that can automatically screen resumes and draft personalized outreach to candidates, helping recruiters fill roles faster. This reflects a wider trend of AI handling repetitive business tasks so humans can focus on higher-level decisions. Even traditionally hands-on fields like medicine and manufacturing saw novel AI applications this week, from AI assisting radiologists in early cancer detection to factory robots using AI vision to improve quality control.
Major Vendor Updates
The period also brought significant announcements from leading AI vendors and tech giants. These developments – ranging from new model launches to strategic partnerships – are poised to shape the products and services available to businesses:
OpenAI: OpenAI introduced a suite of updates aiming to extend its lead in enterprise AI. Announced on May 14, GPT-4.1 is a specialized model that excels at coding assistance and following complex instructions. It is now available not only via the API but also directly in ChatGPT for Plus, Enterprise, and Team users. On May 16, the company launched a research preview of Codex inside ChatGPT – an AI coding assistant that can write, debug, and test code autonomously. Codex runs in a secure sandbox connected to a user’s code repository and can handle tasks that normally take engineers hours, effectively acting as a “virtual software teammate.” To support growing demand, OpenAI confirmed a new cloud deal committing up to $4 billion with CoreWeave for additional GPU infrastructure – a massive capacity boost to keep its AI services responsive at scale. On the application side, OpenAI added features like direct PDF exporting with citations and GitHub integration for ChatGPT, to help researchers and developers work more efficiently, and even began testing built-in product browsing/shopping within ChatGPT’s interface. The latter signals OpenAI’s first steps into e-commerce assistance, an intriguing development for retailers.
Microsoft & Azure: While Microsoft did not unveil a brand-new model this week, it continues to integrate OpenAI’s latest tech into its offerings. With GPT-4.1 becoming available, Azure OpenAI Service and Microsoft’s Copilot products are expected to incorporate those improvements, bringing faster and more capable AI assistance to Office 365, Dynamics, and developer tools. Microsoft executives have highlighted how AI is already boosting productivity – for instance, roughly 30% of code at Microsoft and Google is now machine-generated. We anticipate formal announcements at Microsoft Build (later in May) detailing expanded AI features in Windows and cloud services. Microsoft’s steady rollout of generative AI across its product lines suggests enterprise customers will soon see more AI-driven functionalities (like smarter office document processing and enhanced customer service bots) with minimal friction if they are already in the Microsoft ecosystem.
Google: The company also made headlines by weaving its generative AI deeper into consumer products. It revealed that its AI assistant Gemini (the successor to Google’s Bard, which underpins many of its AI features) will soon be “everywhere” across the Android ecosystem. In an update on May 13, Google announced plans to integrate Gemini into Wear OS smartwatches, Android Auto in-car systems, Google TV streaming devices, and even forthcoming extended reality (XR) devices. The goal is an ambient AI experience – imagine a smartwatch that proactively offers personalized suggestions, or a car dashboard that acts as a co-pilot with voice-assisted route planning and smart recommendations. Google’s push to embed AI into all device form factors demonstrates the competitive race to make AI a ubiquitous utility.
Anthropic: The AI startup (maker of the Claude chatbot) secured a substantial $2.5 billion credit line this week to fuel its expansion. This five-year facility, provided by a consortium of major banks, builds on Anthropic’s momentum after its March funding round valued the company at over $61 billion. Notably, Anthropic revealed its annualized revenue reached $2 billion in Q1 2025 – double the prior quarter – thanks to surging demand from enterprises for its Claude AI models. The new financing will be used for upfront compute investments (training more powerful models and serving rising usage) and to strengthen its balance sheet. For businesses evaluating AI partners, Anthropic’s bolstered war chest and revenue trajectory signal that it will remain a key player (and a well-resourced one) in the enterprise AI race alongside OpenAI. The company is also rumored to be finalizing its next-gen model (code-named Claude Neptune) in the coming weeks, which may narrow the gap with OpenAI’s latest models in both capability and context length. On the governance side, Anthropic updated its Responsible Scaling Policy on May 14 to tighten safety standards for more powerful AI systems – an anticipatory move as its models become more “agentic” and autonomous. This policy change, though technical, should give enterprise clients confidence that Anthropic is proactively managing AI risks (like data security and misuse) as it scales up model capabilities.
xAI (Elon Musk’s AI startup): xAI had a noteworthy product incident. Its Grok chatbot (a beta competitor to ChatGPT known for a more unfiltered style) drew controversy by producing “white genocide” conspiracy remarks in responses to some users. On Thursday, xAI acknowledged an “unauthorized change” in Grok’s system prompts that caused the bot to inject a polarizing political narrative unrelated to user queries. Musk’s team acted quickly: they rolled back the change and pushed an update to prevent such outputs. The company stated the incident violated its internal values and announced new measures – including openly publishing Grok’s prompt instructions on GitHub for public scrutiny and setting up 24/7 human monitoring of the AI’s responses. For business observers, this episode is a cautionary tale about AI content governance. It shows even high-profile “AI assistants” can go off-script if safeguards lapse, and underscores why robust oversight (and transparency) is crucial, especially for AI systems that interact directly with customers. On a positive note, xAI says Grok’s development continues; the startup is reportedly integrating the bot into Twitter (X) as a built-in assistant for users. But this week’s events likely mean xAI will implement tighter control before any broader release – which in turn may reassure enterprise prospects that the technology can be made safe for professional use.
NVIDIA: The chipmaker continues to play an enabling role in almost every major AI initiative. This week NVIDIA announced a landmark partnership with Saudi Arabia’s new AI company, HUMAIN. HUMAIN – launched on May 12 by the Saudi Crown Prince as a “global AI powerhouse” – will work with NVIDIA to build massive AI infrastructure in the Kingdom. The plan includes “AI factories” with up to 500 MW of data center capacity over five years, equipped with hundreds of thousands of NVIDIA’s cutting-edge GPUs. In the first phase alone, an 18,000-GPU supercomputer (based on NVIDIA’s Grace Blackwell architecture) will be deployed. These hyperscale centers aim to train large-scale “sovereign AI models” and offer cloud AI services regionally. For NVIDIA, this multi-billion-dollar deal extends its dominance in AI hardware – and for global businesses, it means more cloud compute will be available in new regions (Middle East, etc.) to run advanced AI workloads. NVIDIA’s CEO Jensen Huang, speaking at an investor forum, also noted the company is navigating export restrictions by developing alternative high-performance chips for markets like China. In short, NVIDIA is ensuring its GPUs remain the de facto foundation for AI development worldwide, whether through partnerships or product tweaks, which will continue to benefit any enterprise relying on GPU-driven AI computing.
Chinese AI Vendors: Among China’s tech giants, Tencent made news during its Q1 earnings release on May 14. The company emphasized that its heavy investments in AI are yielding results: Tencent’s R&D spend jumped 21% year-on-year in Q1, reaching roughly $2.6 billion, largely directed toward upgrading its AI capabilities. Tencent executives highlighted that the firm is building a “comprehensive AI ecosystem” spanning foundation models, cloud computing power, development frameworks, and myriad applications. For example, Tencent has stood up new data centers in the Greater Bay Area to support AI services for industries like gaming and social media. It also announced plans to expand AI infrastructure globally, including opening its first Middle East data center in Saudi Arabia (a $150 million investment) and a third data center in Indonesia ($500 million) to serve overseas customers. Meanwhile, Baidu and Alibaba did not have major public launches this week, but both are active in the background: Baidu’s upgraded ERNIE 4.5 and Alibaba’s open-sourced Qianwen models (revealed in April) are now starting to be adopted by Chinese enterprises, intensifying competition in the domestic market. The key takeaway for international businesses is that China’s AI leaders are rapidly advancing their own platforms – often with government backing – which could soon rival Western offerings in certain domains (and at lower price points). Companies operating in Asia may see a growing array of homegrown AI solutions integrated into cloud and software products, as evidenced by Tencent’s AI-enhanced WeChat services and others.
AI Governance
Amid the technological leaps, there were critical developments in AI governance and policy this week. These include government regulatory moves and corporate initiatives aimed at steering AI development in a responsible, business-friendly way:
U.S. Regulatory Pushback: In Washington, a controversial proposal to bar U.S. states from regulating AI ignited a political battle. A provision tucked into President Donald Trump’s pending tax bill seeks to impose a 10-year federal preemption on any state or local AI laws. This would nullify dozens of recently enacted state rules on high-risk AI (such as bans on explicit deepfakes and requirements for AI transparency in healthcare). On May 16, bipartisan state attorneys general from 40 states publicly opposed the measure, arguing it “deprives consumers of reasonable protections” in a fast-evolving tech space. California’s AG Rob Bonta – whose state has been proactive in AI oversight – said Washington shouldn’t stop states from “responding to emerging AI technology”. This showdown is significant for businesses: many companies prefer one clear federal framework over a patchwork of state rules, but they also recognize that a complete regulatory vacuum could erode public trust. The outcome (which is still uncertain, as the Senate must decide if the AI ban can remain in a budget bill) will determine whether AI governance in the U.S. proceeds in a centralized or decentralized fashion. For now, companies must keep an eye on both federal legislation and state initiatives (like Illinois’s AI hiring law or New York City’s algorithmic bias audits) – all of which could shape compliance obligations for AI use in hiring, lending, healthcare, and more.
Global Policy & Investment (EU): In Europe, the focus is on both regulating and funding AI. The European Investment Bank (EIB) this week revealed a new project dubbed “Tech EU” aimed at boosting the bloc’s capabilities in AI and semiconductors. The EIB’s president announced an ambitious goal to mobilize €70 billion by 2027 for AI and chip research, as part of a broader plan to raise €250 billion in tech investments long-term. This initiative aligns with the EU’s strategic desire to stay competitive with the U.S. and China in critical technologies. Funds are likely to support startups, academic R&D, and possibly the build-out of European AI cloud infrastructure – all of which could benefit businesses operating in the EU by expanding the local AI talent pool and resources. On the regulatory side, the EU’s landmark AI Act continues its implementation process. While not finalized this week, it’s worth noting that preliminary guidelines were issued in April on how upcoming rules will apply to general-purpose AI models. The AI Act’s first provisions (like bans on certain harmful AI practices) have already taken effect as of early 2025, with full compliance obligations for high-risk systems coming in 2026. Companies in Europe are thus in a transition period – this week many are digesting what the draft Code of Conduct for AI (expected over the summer) will require. In parallel, the Council of Europe is advancing an AI Treaty on human rights and AI; notably, Switzerland announced plans to ratify this convention and adjust its laws accordingly. For business leaders, Europe’s multi-pronged approach means AI products will face stricter rules on transparency, safety, and ethics – but also that funding and guidance will be available to help comply and innovate responsibly.
Corporate AI Governance Initiatives: The private sector is also proactively shaping AI governance this week. OpenAI made a noteworthy announcement with “OpenAI for Countries”, an initiative inviting national governments to partner on building AI infrastructure and crafting policies consistent with democratic values. Announced on May 17, this program essentially offers governments a package: help in setting up local data centers for AI (so that a country’s data and model customization can stay sovereign), access to a customized version of ChatGPT for public services (in the local language and context), and assistance in developing safety and security standards. OpenAI frames this as building rails for “democratic AI” – meaning AI that enhances freedom, privacy, and competition, as opposed to “authoritarian” uses of AI for surveillance or control. The program is also coordinated with the U.S. government’s diplomatic efforts. The offer to work with up to 10 countries in a first phase indicates that some nations (possibly in Europe, Latin America, or Asia) could soon launch joint AI projects with OpenAI. For businesses, this is an interesting development: if governments accept, we may see national AI platforms emerge (e.g. a country-specific GPT for education or healthcare) which could open new public-sector markets for AI services. It could also set informal standards – for instance, requiring that AI systems respect certain principles (bias mitigation, human oversight) in order to operate in those jurisdictions. In a related vein, Anthropic’s update to its Responsible Scaling Policy (noted earlier) is another sign of industry self-governance. By refining definitions of security thresholds and explicitly planning safeguards as models get more capable, Anthropic is trying to stay ahead of potential regulation and build trust with enterprise clients.
Likewise, IBM, Google, Microsoft, and other firms were reported this week to be expanding their internal AI ethics committees and auditing processes (continuing a trend from prior weeks).
Breakthrough Research
This week also saw breakthrough research results that, while early, point to where AI is heading and how future applications might evolve. A highlight was new work on agentic AI – systems that can autonomously explore solutions – which could fundamentally expand what businesses accomplish with AI in the near future.
AlphaEvolve – AI Designing Algorithms: Google DeepMind unveiled a project called AlphaEvolve that experts are calling a “spectacular” advance in general-purpose AI. Revealed in a research paper on May 14, AlphaEvolve is an AI agent that uses large language models (LLMs) to create and refine computer algorithms in areas like mathematics and software optimization. In practical terms, AlphaEvolve can be given a complex problem – for example, finding a faster way to multiply matrices or improve a chip design – and it will generate candidate solutions in code, test and rank them, then iteratively improve them through a kind of evolutionary loop. Notably, DeepMind has already applied this agent internally with striking results. According to Nature’s news coverage, AlphaEvolve helped design the next generation of Google’s AI chips (TPUs) and managed to find optimizations in Google’s data centers that save about 0.7% of total computing resources. That percentage may sound small, but for Google it is enormously significant (imagine reclaiming 0.7% of a global infrastructure – that’s millions in cost savings and a greener footprint). Moreover, AlphaEvolve solved some open math problems and outperformed human-designed methods on certain coding challenges. Why does this matter for businesses? It suggests a future where AI doesn’t just assist humans in routine tasks, but can innovate on its own to a degree – improving code, discovering more efficient processes, or optimizing designs beyond human intuition. In a sense, we’re seeing the early steps of AI as a research and development agent. While still experimental, companies like Google are likely to fold these techniques into their cloud offerings. In a few years, an enterprise might use a cloud AI service that, say, automatically rewrites parts of its software to make it 20% more efficient, or designs a custom machine learning model architecture tailored to the company’s needs.
AlphaEvolve also exemplifies agentic AI safety in practice: it operates in a constrained, feedback-rich environment (generating code and receiving test results), which is a relatively controllable setting. It’s a positive example of letting AI autonomy loose on well-bounded technical problems for big payoffs.
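The generate–test–rank–improve loop described above can be illustrated with a toy sketch. Here a random numeric mutation stands in for the LLM that proposes candidate programs, and a simple scoring function stands in for AlphaEvolve’s automated evaluators; all names are illustrative, not DeepMind’s actual API:

```python
import random

def evolutionary_search(initial, score, mutate, generations=300, population_size=8):
    """Generic evolve loop: propose candidates, score them, keep the best."""
    population = [initial]
    for _ in range(generations):
        # Propose new candidates by mutating current survivors
        candidates = population + [mutate(random.choice(population))
                                   for _ in range(population_size)]
        # Rank by the automated evaluator and keep only the top performers
        population = sorted(candidates, key=score, reverse=True)[:population_size]
    return population[0]

# Toy stand-in problem: evolve a vector toward the all-ones target.
TARGET = [1.0] * 10

def score(vec):
    # Higher is better: negative squared distance to the target
    return -sum((v - t) ** 2 for v, t in zip(vec, TARGET))

def mutate(vec):
    # Randomly nudge one coordinate (the "LLM proposal" in this toy)
    out = list(vec)
    i = random.randrange(len(out))
    out[i] += random.uniform(-0.5, 0.5)
    return out

best = evolutionary_search([0.0] * 10, score, mutate)
```

Because survivors are carried forward into each new candidate pool, the best score never regresses; in AlphaEvolve the candidates are programs and the evaluator is a real test harness, but the control flow is analogous.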
AI “Autonomy” and Agents Trend: Beyond DeepMind’s work, the idea of AI agents that can perform multi-step reasoning and actions is a hot area across the research community. Earlier this year, smaller open-source projects like “Auto-GPT” captured headlines by chaining GPT-4 thoughts to tackle goals (like researching and writing reports) without constant human prompts. While those were often inefficient, they demonstrated the concept’s potential. Now larger players are formalizing the approach. Anthropic, for instance, recently released an agentic coding model called Claude Code (in February) and Microsoft is testing Jarvis-like AI agents that can orchestrate other software tools. Even this week’s TechCrunch report on OpenAI’s Codex noted that CEOs of major tech firms estimate nearly one-third of programming work is now done by AI – effectively treating the AI like an autonomous junior developer. The key research question is how to make these agents reliable, controllable, and able to know when to stop or ask for help (to avoid the “runaway AI” scenario). In that light, it’s notable that Anthropic’s updated policy (RSP 2.2) explicitly addresses advanced “frontier” model risks, including refining rules to guard against misuse by insiders or via model self-modification. This shows top labs are preparing for more powerful agents by shoring up security and alignment measures. Another research thread generating buzz is AI in science and medicine: this week a peer-reviewed study described an AI that autonomously hypothesized new antibiotic molecules – a process that would’ve taken humans much longer. Meanwhile, OpenAI’s recent “superalignment” research agenda (not from this week, but ongoing) aims to devise training techniques so that highly agentic AI systems remain aligned with human values and instructions even as they self-improve. For business leaders, these developments are double-edged.
On one hand, more autonomous AI could drastically improve productivity and unlock creative solutions (as AlphaEvolve did). On the other, it raises the importance of governance and oversight: businesses deploying agentic AI will need strong internal policies, verification tests, and perhaps regulatory compliance to ensure the AI’s actions stay within intended bounds. We may soon see certifications or audits specifically for AI agents used in critical workflows, akin to how autonomous vehicles undergo safety testing.
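The control measures discussed here – step budgets, explicit stop conditions, escalation to a human – can be sketched in a minimal agent loop. The model and tool below are stubs, and `run_agent` with its string-based action protocol is a hypothetical illustration, not any vendor’s actual agent framework:

```python
from typing import Callable

def run_agent(llm: Callable[[str], str], tools: dict, goal: str, max_steps: int = 5):
    """Minimal agent loop: ask the model for the next action, execute it,
    feed back the observation, and stop on 'FINISH' or when the step
    budget runs out (a basic guardrail against runaway loops)."""
    transcript = f"Goal: {goal}"
    for _ in range(max_steps):
        decision = llm(transcript)                 # e.g. "lookup: revenue"
        if decision.startswith("FINISH"):
            return decision.removeprefix("FINISH:").strip()
        tool_name, _, arg = decision.partition(":")
        tool = tools.get(tool_name.strip())
        if tool is None:                           # unknown action: hand off to a human
            return "ESCALATE: unrecognized action " + decision
        observation = tool(arg.strip())
        transcript += f"\n{decision}\n-> {observation}"
    return "ESCALATE: step budget exhausted"       # hard stop, human takes over

# Stubbed "model" and tool for illustration only
def fake_llm(transcript: str) -> str:
    return "lookup: revenue" if "->" not in transcript else "FINISH: revenue is 2B"

result = run_agent(fake_llm, {"lookup": lambda q: "revenue is 2B"}, "find revenue")
```

Even in this toy form, the two escalation paths and the hard step limit are the kind of verifiable bounds an internal policy (or a future agent audit) would look for.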
Benchmarking and Limitations: It’s also worth noting that research continues to identify where current AI models fall short. A Microsoft study out this week found that even leading coding AIs (including OpenAI’s and Anthropic’s latest) still struggle with reliably debugging code. They tend to suggest fixes that pass tests in the short run but might introduce subtle bugs – a reminder that human engineers remain in the loop for now. Similarly, new benchmarks on multi-modal AI (vision + language systems) showed improvements but not parity with human-level understanding in complex tasks like medical image analysis. These findings temper the hype and indicate areas where more breakthroughs are needed. However, each limitation identified is also a business opportunity: startups and research teams are quickly tackling these gaps. For example, after discovering coding errors, companies are working on verification tools that automatically double-check AI-written code. The trajectory from research to product is extremely fast in AI – sometimes mere months. A practical case in point: last year’s academic discovery that language models can explain reasoning step-by-step (“chain-of-thought” prompting) has already been productized in at least two enterprise AI tools for better decision traceability.
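The verification-tool idea mentioned above – automatically double-checking AI-written code before accepting it – can be sketched as a simple test gate: run the candidate patch against its tests in isolation and reject it on any failure. `verify_patch` and the sample fix are illustrative assumptions, not a description of any specific product:

```python
import pathlib
import subprocess
import sys
import tempfile

def verify_patch(candidate_code: str, test_code: str) -> bool:
    """Gate an AI-suggested function behind its tests: write candidate plus
    tests to a scratch file and accept the patch only if the suite passes."""
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / "candidate.py"
        path.write_text(candidate_code + "\n" + test_code)
        # Run in a separate interpreter so a bad patch can't corrupt our process
        proc = subprocess.run([sys.executable, str(path)], capture_output=True)
        return proc.returncode == 0

# A plausibly AI-suggested fix and the tests that gate it (illustrative only)
fix = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"
tests = "assert clamp(5, 0, 3) == 3\nassert clamp(-1, 0, 3) == 0\n"
accepted = verify_patch(fix, tests)
```

Real verification tools add sandboxing, mutation testing, and static analysis on top, but the core contract is the same: AI-written code earns its way into the codebase by passing independent checks.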
Swiss Developments
Switzerland had its own flurry of AI news this week, underscoring the country’s efforts to stay at the forefront of AI innovation and governance:
National AI Initiative: Swiss telecom giant Swisscom announced on May 14 that it is joining forces with the country’s top universities to advance trustworthy AI research. Swisscom became a member of the newly formed Swiss National AI Institute (SNAI), a joint initiative by ETH Zurich and EPFL launched to boost Switzerland’s AI capabilities. Through this partnership, Swisscom will collaborate on developing open-source foundation models “made in Switzerland” – including a potential Swiss large language model – and on building the necessary supercomputing infrastructure. The focus is on AI that reflects Swiss values (data privacy, transparency, reliability) and strengthening “Swiss digital sovereignty.” For the Swiss economy, this is a strategic move. It means in the coming years, local businesses and government agencies might have access to domestically developed AI tools optimized for multilingual Swiss contexts (German, French, Italian) and high standards of trust. It could reduce reliance on foreign AI providers for sensitive use cases, while also fostering a homegrown AI talent ecosystem. Swisscom’s involvement also signals industry commitment: as one of the largest ICT companies in Switzerland, it can help translate academic AI advances into real products and services for Swiss customers (for example, AI-enhanced telecom networks or customer service chatbots that are privacy-compliant and Swiss-language fluent).
Workforce Readiness and Corporate Attitudes: A report released in Zurich on May 15 by the Adecco Group highlighted how Swiss and global executives view AI’s impact on work. The survey of 2,000 C-suite leaders (across 13 countries, including Switzerland) found a significant gap between AI ambitions and preparedness. While nearly all leaders expect AI-driven transformation in their business by 2030, only 10% say their organizations are “fully ready” for the disruption. Tellingly, 60% of leaders expect employees to proactively upskill for AI – yet 34% of companies have no formal AI usage policy or training program in place. In Switzerland, known for its highly skilled workforce, this finding resonates: there is high awareness of AI’s importance, but many firms (especially mid-sized ones) haven’t yet implemented concrete plans to train staff or govern AI use. The Adecco report suggests that companies which are “AI-future-ready” tend to invest in continuous learning, data literacy, and human-centric change management, and they outperform others in innovation. This is a wake-up call within the Swiss business community to accelerate internal AI education and strategy alignment. It also complements initiatives like SNAI: as Switzerland produces cutting-edge AI research, its corporations must ensure management and employees can effectively adopt these tools. Swiss firms pride themselves on precision and quality – extending that ethos to AI means establishing clear policies (for example, when is it appropriate to use generative AI with client data, and how to verify its outputs) and not leaving adaptation purely to individual initiative.
Swiss AI Success Stories: On a positive note, several Swiss AI startups and projects were highlighted in local media this week for their progress. For example, Swiss cognitive robotics researchers from ETH demonstrated an AI-driven drone system for mountain rescue that garnered international awards. And in the finance sector, Zürcher Kantonalbank reported early success with a pilot AI system that helps detect fraud patterns in online banking, operating as a smart assistant to compliance officers. While these didn’t make global headlines, they underscore Switzerland’s strengths in applied AI. The country’s combination of strong research institutions, innovation-friendly regulation, and industry-academic collaboration (as exemplified by Swisscom–SNAI) continues to yield practical AI solutions in niches like precision manufacturing, medtech, and environmental monitoring. Swiss startups working on AI for drug discovery, for instance, are attracting notable venture funding as they leverage Switzerland’s pharma hub and AI expertise.
Conclusion
The mid-May 2025 AI news cycle delivered a vivid snapshot of a technology transitioning from novelty to necessity across the business world. We saw concrete use cases flourish – from AI co-pilots in accounting firms and factories, to government agencies streamlining services with generative AI. These real deployments underscore that companies can derive immediate value from AI when it’s carefully targeted at high-impact problems. At the same time, the major AI vendors are in a feature-and-model arms race, each pushing out improvements that will make AI tools faster, more powerful, and more integrated into the workflows of everyday business. For decision-makers, it’s crucial to track these vendor moves, as they will define the capabilities (and choices) you have when selecting AI solutions – whether it’s OpenAI’s latest coding assistant, Anthropic’s reliable enterprise chatbot, or NVIDIA-powered infrastructure becoming available in new regions.
The frontier research reminds us that the AI of tomorrow will eclipse what we have today. The emergence of agentic AI – systems that can perform non-trivial tasks autonomously – could be as transformative to knowledge work as automation was to manufacturing. This week’s breakthroughs hint at AI not just as a tool, but as a potential creator and optimizer within your business. That opens exciting opportunities for innovation and efficiency, but also demands vigilance to ensure such powerful capabilities are deployed safely and ethically.
The overarching message from this week’s AI news is one of accelerating maturity. AI is steadily moving from the lab to the field, from pilot projects to enterprise-wide rollouts, and from unregulated algorithms to governed ecosystems. For business leaders, staying informed is no longer sufficient – it’s time to formulate or refine your own AI strategy. That means asking: Where can AI drive value in my organization today? How do we navigate the vendor landscape to choose the right partners? Do we have the policies and talent in place to use AI responsibly? And what coming breakthroughs (like autonomous AI agents) should we be ready to leverage or address? The companies and countries taking action on these fronts, as evidenced this week, are positioning themselves to thrive in the next phase of the AI revolution. Those who hesitate may find themselves playing catch-up in a year’s time.
Sources
Consulting Magazine – Crowe expands firmwide use of ChatGPT Enterprise (May 13, 2025).
NTT Data Press Release – Launch of Smart AI Agent ecosystem and industry deployments (May 16, 2025).
TechCrunch / OpenAI – Introduction of Codex coding agent and GPT-4.1 model update (May 16–17, 2025).
Reuters – CoreWeave signs $4 billion cloud deal with OpenAI (May 15, 2025).
Medium (Tim McAllister) – New ChatGPT features including shopping integration (May 17, 2025).
PYMNTS / CNBC – Anthropic secures $2.5B credit line, $2B revenue run-rate (May 16, 2025).
Reuters – xAI updates Grok after unauthorized prompt change incident (May 17, 2025).
NVIDIA press release – Partnership with HUMAIN to build AI supercomputing centers (May 13, 2025).
Xinhua/Global Times – Tencent Q1 2025 results, 21% R&D jump and global AI infrastructure plans (May 18, 2025).
Reuters – U.S. state AGs oppose federal ban on state AI laws in Trump bill (May 16, 2025).
Reuters – European Investment Bank plans €70B AI/chip investment project (May 17, 2025).
OpenAI Blog – Launch of “OpenAI for Countries” to promote democratic AI (May 17, 2025).
Nature News – DeepMind’s AlphaEvolve AI designs algorithms, improves Google TPU and data center efficiency (May 14, 2025).
SiliconANGLE – Technical details on AlphaEvolve’s iterative code-generation process (May 14, 2025).
TechCrunch – Context on industry coding AI usage and Microsoft study on AI debugging limits (May 16, 2025).
Swisscom Press Release – Joining Swiss National AI Institute to develop Swiss AI models (May 14, 2025).
Adecco Group Report – Global survey of C-suite on AI readiness (May 15, 2025).