Announcing new fine-tuning models and techniques in Azure AI Foundry


Today, we’re excited to announce three major enhancements to model fine-tuning in Azure AI Foundry: Reinforcement Fine-Tuning (RFT) with o4-mini (coming soon), Supervised Fine-Tuning (SFT) for GPT-4.1-nano (available now), and fine-tuning for Meta’s Llama 4 Scout (available now). These updates reflect our continued commitment to empowering organizations with tools to build highly customized, domain-adapted AI systems for real-world impact. 

With these new models, we’re unlocking three major avenues of LLM customization: GPT-4.1-nano is a powerful small model, ideal for distillation; o4-mini is the first reasoning model you can fine-tune; and Llama 4 Scout is a best-in-class open-source model. 

Reinforcement Fine-Tuning with o4-mini 

Reinforcement Fine-Tuning introduces a new level of control for aligning model behavior with complex business logic. By rewarding accurate reasoning and penalizing undesirable outputs, RFT improves model decision-making in dynamic or high-stakes environments.

Coming soon for the o4-mini model, RFT unlocks new possibilities for use cases requiring adaptive reasoning, contextual awareness, and domain-specific logic, all while maintaining fast inference performance.
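
Under the hood, an RFT job pairs your training prompts with a grader that scores each sampled response, so the model is reinforced toward answers the grader accepts. As a minimal sketch, assuming the Azure OpenAI fine-tuning surface mirrors OpenAI’s reinforcement method, launching a job could look like the following; the endpoint, API version, file ID, and grader are all placeholders.

```python
# A minimal sketch of launching a Reinforcement Fine-Tuning job, assuming the
# Azure OpenAI fine-tuning surface mirrors OpenAI's "reinforcement" method.
# Endpoint, key, API version, file ID, and the grader are all placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2025-04-01-preview",                           # assumed preview version
)

job = client.fine_tuning.jobs.create(
    model="o4-mini",
    training_file="file-abc123",  # JSONL of prompts plus reference answers (illustrative ID)
    method={
        "type": "reinforcement",
        "reinforcement": {
            # The grader rewards correct answers and penalizes everything else.
            "grader": {
                "type": "string_check",
                "name": "exact_answer",
                "input": "{{sample.output_text}}",
                "reference": "{{item.answer}}",
                "operation": "eq",
            },
            "hyperparameters": {"reasoning_effort": "medium"},
        },
    },
)
print(job.id, job.status)
```

A string-check grader is the simplest case; scenarios with richer business logic would swap in a scoring function like the Contoso example further below.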

Real-world impact: DraftWise 

DraftWise, a legal tech startup, used reinforcement fine-tuning (RFT) in Azure AI Foundry Models to enhance the performance of reasoning models tailored for contract generation and review. Faced with the challenge of delivering highly contextual, legally sound suggestions to lawyers, DraftWise fine-tuned Azure OpenAI models using proprietary legal data to improve response accuracy and adapt to nuanced user prompts. This led to a 30% improvement in search result quality, enabling lawyers to draft contracts faster and focus on high-value advisory work. 

Reinforcement fine-tuning on reasoning models is a potential game changer for us. It’s helping our models understand the nuance of legal language and respond more intelligently to complex drafting instructions, which promises to make our product significantly more valuable to lawyers in real time.

—James Ding, founder and CEO of DraftWise.

When should you use Reinforcement Fine-Tuning?

Reinforcement Fine-Tuning is best suited to use cases where adaptability, iterative learning, and domain-specific behavior are essential. You should consider RFT if your scenario involves: 

  1. Custom Rule Implementation: RFT thrives in environments where decision logic is highly specific to your organization and cannot easily be captured through static prompts or traditional training data. It enables models to learn flexible, evolving rules that reflect real-world complexity. 
  2. Domain-Specific Operational Standards: Ideal for scenarios where internal procedures diverge from industry norms, and where success depends on adhering to those bespoke standards. RFT can effectively encode procedural variations, such as extended timelines or modified compliance thresholds, into the model’s behavior. 
  3. High Decision-Making Complexity: RFT excels in domains with layered logic and variable-rich decision trees. When outcomes depend on navigating numerous subcases or dynamically weighing multiple inputs, RFT helps models generalize across complexity and deliver more consistent, accurate decisions. 

Example: Wealth advisory at Contoso Wellness 

To showcase the potential of RFT, consider Contoso Wellness, a fictitious wealth advisory firm. Using RFT, the o4-mini model learned to adapt to unique business rules, such as identifying optimal client interactions based on nuanced patterns like the ratio of a client’s net worth to available funds. This enabled Contoso to streamline their onboarding processes and make more informed decisions faster.
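
To make that concrete, here is a hypothetical Python grader in the spirit of Contoso’s rule. Everything in it, from the field names to the 5x threshold and the tier labels, is invented for illustration; a real RFT grader would encode your own business logic.

```python
# Hypothetical reward function in the spirit of the Contoso rule. The model is
# asked to emit JSON with an engagement tier; the grader checks it against a
# simple net-worth-to-available-funds ratio. Field names, the 5x threshold,
# and the tier labels are all invented for illustration.
import json

def grade(sample: str, item: dict) -> float:
    """Score one model output against the business rule; returns 0.0-1.0."""
    try:
        proposed = json.loads(sample)["tier"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return 0.0  # malformed output earns no reward

    ratio = item["net_worth"] / max(item["available_funds"], 1.0)
    expected = "priority" if ratio >= 5.0 else "standard"
    return 1.0 if proposed == expected else 0.2  # partial credit for valid JSON

# Example: a client with a 10x ratio should be routed to the priority tier.
print(grade('{"tier": "priority"}', {"net_worth": 5_000_000, "available_funds": 500_000}))
```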

Supervised Fine-Tuning now available for GPT-4.1-nano 

We’re also bringing Supervised Fine-Tuning (SFT) to the GPT-4.1-nano model: a small but powerful foundation model optimized for high-throughput, cost-sensitive workloads. With SFT, you can instill your model with company-specific tone, terminology, workflows, and structured outputs, all tailored to your domain. This model will be available for fine-tuning in the coming days. 
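
As a sketch of what this looks like in practice, the snippet below uploads chat-format JSONL training data and starts an SFT job with the OpenAI Python SDK against Azure OpenAI; the endpoint, API version, and hyperparameter values are placeholders rather than confirmed settings.

```python
# A minimal sketch of a supervised fine-tuning job for GPT-4.1-nano using the
# OpenAI Python SDK against Azure OpenAI. Endpoint, key, API version, and
# hyperparameters are placeholders, not confirmed values.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2025-04-01-preview",  # assumed preview version
)

# train.jsonl holds one chat-format example per line, e.g.:
# {"messages": [{"role": "system", "content": "You are Contoso's support agent."},
#               {"role": "user", "content": "Where is my order?"},
#               {"role": "assistant", "content": "Happy to help! What is your order number?"}]}
training = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    model="gpt-4.1-nano",
    training_file=training.id,
    hyperparameters={"n_epochs": 3},  # illustrative setting
)
print(job.id)
```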

Why fine-tune GPT-4.1-nano? 

  • Precision at Scale: Tailor the model’s responses while maintaining speed and efficiency. 
  • Enterprise-Grade Output: Ensure alignment with business processes and tone of voice. 
  • Lightweight and Deployable: Perfect for scenarios where latency and cost matter, such as customer service bots, on-device processing, or high-volume document parsing. 

Compared to larger models, GPT-4.1-nano delivers faster inference and lower compute costs, making it well suited for large-scale workloads like: 

  • Customer support automation, where models must handle thousands of tickets per hour with consistent tone and accuracy. 
  • Internal knowledge assistants that follow company style and protocol when summarizing documentation or responding to FAQs. 

As a small, fast, but highly capable model, GPT-4.1-nano makes a great candidate for distillation as well. You can use models like GPT-4.1 or o4 to generate training data, or capture production traffic with stored completions, and teach 4.1-nano to be just as smart!
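
One lightweight way to build that distillation set is to have the teacher model answer your prompts and save the exchanges as chat-format JSONL, as in this sketch (same placeholder client as above, illustrative deployment names):

```python
# A sketch of simple distillation: a larger teacher model answers your prompts,
# and the pairs are written as chat-format JSONL for fine-tuning gpt-4.1-nano.
# The client setup matches the placeholder above; deployment names are illustrative.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2025-04-01-preview",  # assumed preview version
)

prompts = ["Summarize our refund policy.", "Draft a polite escalation reply."]

with open("distill.jsonl", "w") as f:
    for prompt in prompts:
        teacher = client.chat.completions.create(
            model="gpt-4.1",  # your teacher deployment name (placeholder)
            messages=[{"role": "user", "content": prompt}],
        )
        record = {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher.choices[0].message.content},
        ]}
        f.write(json.dumps(record) + "\n")
# distill.jsonl can then be uploaded as the training_file for the SFT job above.
```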

Fine-tune gpt-4.1-nano demo in Azure AI Foundry.

Llama 4 fine-tuning now available 

We’re also excited to announce support for fine-tuning Meta’s Llama 4 Scout: a cutting-edge model with 17 billion active parameters that offers an industry-leading context window of 10M tokens while fitting on a single H100 GPU for inference. It’s a best-in-class model, more powerful than all previous-generation Llama models. 

Llama 4 fine-tuning is available in our managed compute offering, allowing you to fine-tune and run inference using your own GPU quota. Available both in Azure AI Foundry and as Azure Machine Learning components, it gives you access to additional hyperparameters for deeper customization compared to our serverless experience.
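
For orientation, here is a rough sketch using the Azure ML Python SDK (azure-ai-ml); the registry and model names are assumptions modeled on how earlier Llama releases were published, so verify them in the model catalog before relying on them.

```python
# A rough sketch of locating Llama 4 Scout for managed-compute fine-tuning with
# the Azure ML Python SDK (azure-ai-ml). The registry and model names below are
# assumptions based on how earlier Llama models were published; check the model
# catalog for the exact identifiers.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Workspace client: fine-tuning jobs run against your own GPU quota here.
ml_client = MLClient(
    credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Registry client: Meta models have historically lived in the "azureml-meta" registry.
registry = MLClient(credential, registry_name="azureml-meta")
base_model = registry.models.get("Llama-4-Scout-17B-16E-Instruct", label="latest")  # assumed name
print(base_model.id)

# From here you would submit the catalog's fine-tuning pipeline against your
# compute, with the extra hyperparameters (epochs, learning rate, LoRA settings)
# that the managed compute offering exposes beyond the serverless experience.
```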

Get started with Azure AI Foundry today

Azure AI Foundry is your foundation for enterprise-grade AI tuning. These fine-tuning enhancements unlock new frontiers in model customization, helping you build intelligent systems that think and respond in ways that reflect your business DNA.

  • Use Reinforcement Fine-Tuning with o4-mini to build reasoning engines that learn from experience and evolve over time. Coming soon in Azure AI Foundry, with regional availability in East US2 and Sweden Central. 
  • Use Supervised Fine-Tuning with GPT-4.1-nano to scale reliable, cost-efficient, and highly customized model behaviors across your organization. Available now in Azure AI Foundry in North Central US and Sweden Central. 
  • Try Llama 4 Scout fine-tuning to customize a best-in-class open-source model. Available now in the Azure AI Foundry model catalog and Azure Machine Learning. 

With Azure AI Foundry, fine-tuning isn’t just about accuracy; it’s about trust, efficiency, and adaptability at every layer of your stack. 

Explore further: 

We’re just getting started. Stay tuned for more model support, advanced tuning techniques, and tools to help you build AI that’s smarter, safer, and uniquely yours. 


