Our 2025 Responsible AI Transparency Report: How we build, support our customers, and grow


In May 2024, we released our inaugural Responsible AI Transparency Report. We're grateful for the feedback we received from our stakeholders around the world. Their insights have informed this second annual Responsible AI Transparency Report, which underscores our continued commitment to building AI technologies that people trust. Our report highlights new developments related to how we build and deploy AI systems responsibly, how we support our customers and the broader ecosystem, and how we learn and evolve.

The past year has seen a wave of AI adoption by organizations of all sizes, prompting a renewed focus on effective AI governance in practice. Our customers and partners are eager to learn how we have scaled our program at Microsoft and developed tools and practices that operationalize high-level norms.

Like us, they've found that building trustworthy AI is good for business, and that good governance unlocks AI opportunities. According to IDC's Microsoft Responsible AI Survey, which gathered insights on organizational attitudes and the state of responsible AI, over 30% of respondents cite the lack of governance and risk management solutions as the top barrier to adopting and scaling AI. Conversely, more than 75% of respondents who use responsible AI tools for risk management say those tools have helped with data privacy, customer experience, confident business decisions, brand reputation, and trust.

We've also seen new regulatory efforts and laws emerge over the past year. Because we've invested in operationalizing responsible AI practices at Microsoft for nearly a decade, we're well prepared to comply with these regulations and to empower our customers to do the same. Our work here is not done, however. As we detail in the report, efficient and effective regulation and implementation practices that support the adoption of AI technology across borders are still being defined. We remain focused on contributing our practical insights to standard- and norm-setting efforts around the world.

Across all these facets of governance, it's important to remain nimble in our approach: applying learnings from our real-world deployments, updating our practices to reflect advances in the state of the art, and ensuring that we're responsive to feedback from our stakeholders. Learnings from our principled and iterative approach are reflected in the pages of this report. As our governance practices continue to evolve, we'll proactively share our latest insights with our stakeholders, both in future annual transparency reports and in other public settings.

Key takeaways from our 2025 Transparency Report 

In 2024, we made key investments in our responsible AI tools, policies, and practices to move at the speed of AI innovation.

    1. We improved our responsible AI tooling to provide expanded risk measurement and mitigation coverage for modalities beyond text, such as images, audio, and video, as well as additional support for agentic systems: semi-autonomous systems that we anticipate will represent a significant area of AI investment and innovation in 2025 and beyond.
    2. We took a proactive, layered approach to compliance with new regulatory requirements, including the European Union's AI Act, and provided our customers with resources and materials that empower them to innovate in line with relevant regulations. Our early investments in building a comprehensive and industry-leading responsible AI program positioned us well to shift our AI regulatory readiness efforts into high gear in 2024.
    3. We continued to apply a consistent risk management approach across releases through our pre-deployment review and red teaming efforts. This included oversight and review of high-impact and higher-risk uses of AI and generative AI releases, including every flagship model added to the Azure OpenAI Service and every Phi model release. To further support responsible AI documentation as part of these reviews, we launched an internal workflow tool designed to centralize the various responsible AI requirements outlined in the Responsible AI Standard.
    4. We continued to provide hands-on counseling for high-impact and higher-risk uses of AI through our Sensitive Uses and Emerging Technologies team. Generative AI applications, especially in fields like healthcare and the sciences, were notable growth areas in 2024. By gleaning insights across cases and engaging researchers, the team provided early guidance for novel risks and emerging AI capabilities, enabling innovation and incubating new internal policies and guidelines.
    5. We continued to lean on insights from research to inform our understanding of sociotechnical issues related to the latest developments in AI. We established the AI Frontiers Lab to invest in the core technologies that push the frontier of what AI systems can do in terms of capability, efficiency, and safety.
    6. We worked with stakeholders around the world to make progress toward building coherent governance approaches that help accelerate adoption and allow organizations of all kinds to innovate and use AI across borders. This included publishing a book exploring governance across various domains and helping advance cohesive standards for testing AI systems.

Looking ahead to the second half of 2025 and beyond

As AI innovation and adoption continue to advance, our core objective remains the same: earning the trust that we see as foundational to fostering broad and beneficial AI adoption around the world. As we continue that journey over the next year, we will focus on three areas to advance our steadfast commitment to AI governance while ensuring that our efforts are responsive to an ever-evolving landscape:

  1. Developing more flexible and agile risk management tools and practices, while fostering skills development to anticipate and adapt to advances in AI. To ensure people and organizations around the world can leverage the transformative potential of AI, our ability to anticipate and manage the risks of AI must keep pace with AI innovation. This requires us to build tools and practices that can quickly adapt to advances in AI capabilities and to the growing variety of deployment scenarios, each with its own risk profile. To do this, we will make greater investments in our systems of risk management to provide tools and practices for the most common risks across deployment scenarios, and to enable the sharing of test sets, mitigations, and other best practices across teams at Microsoft.
  2. Supporting effective governance across the AI supply chain. Building, earning, and keeping trust in AI is a collaborative endeavor that requires model developers, app developers, and system users to each contribute to trustworthy design, development, and operations. AI regulations, including the EU AI Act, reflect this need for information to flow across supply chain actors. While we embrace this concept of shared responsibility at Microsoft, we also recognize that pinning down how responsibilities fit together is complex, especially in a fast-changing AI ecosystem. To help advance shared understanding of how this can work in practice, we're deepening our work internally and externally to clarify roles and expectations.
  3. Advancing a vibrant ecosystem through shared norms and effective tools, particularly for AI risk measurement and evaluation. The science of AI risk measurement and evaluation is a growing but still nascent field. We're committed to supporting the maturation of this field by continuing to invest within Microsoft, including in research that pushes the frontiers of AI risk measurement and evaluation and in the tooling to operationalize it at scale. We remain committed to sharing our latest advances in tooling and best practices with the broader ecosystem to support the development of shared norms and standards for AI risk measurement and evaluation.

We look forward to hearing your feedback on the progress we have made and on opportunities to collaborate on all that's still left to do. Together, we can advance AI governance efficiently and effectively, fostering trust in AI systems at a pace that matches the opportunities ahead.
Explore the 2025 Responsible AI Transparency Report
