The Week Microsoft Broke Email for Everyone

It started with S3150. Then the rate limiting hit.

Why the February 2026 Outlook crisis goes beyond a single error code, and what it reveals about the fragility of email delivery at scale.


On February 21, we published a deep dive into Microsoft's S3150 error code, documenting how operators were receiving permanent 550 rejections from IPs that Microsoft's own SNDS portal showed as clean, how the delist process contradicted reality, and how transactional mail was becoming collateral damage. That article, which Laura Atkins subsequently shared on Word to the Wise, resonated because it described a problem that hundreds of operators recognized from their own experience.

What we did not anticipate was that the situation would escalate dramatically within 48 hours. Starting on February 23, a new and broader crisis hit the Outlook consumer mail infrastructure. This time, it was not limited to S3150 hard blocks. Microsoft deployed aggressive rate limiting that swept across the industry, catching enterprise email security platforms, dedicated hosting providers, SaaS transactional senders, and small mail operators in the same net. The fallout played out simultaneously across the mailop mailing list, the Email Geeks Slack community, Reddit, and Microsoft's own support forums, making it one of the largest cross-platform email delivery incidents the industry has seen in years.

This article picks up where our S3150 analysis left off. Everything described here is based on publicly observable community discussions from the final week of February 2026.


From Hard Blocks to Broad Rate Limiting

The S3150 pattern we documented on February 21 was specific: a sender receives a permanent 550 rejection referencing the S3150 error code, checks the delist portal, gets told they are not blocked, and watches the rejections continue anyway. That pattern had been playing out since at least late January, and the community discussion around it was already heated enough to require moderator intervention on mailop.

On February 23, something shifted. Operators began reporting a different class of error entirely. Instead of, or in addition to, the familiar S3150 permanent rejections, they started receiving 451 4.7.650 temporary failures citing IP reputation. The error referenced Microsoft's Protocol Filter Agent and included the S775 internal code. The operator who first raised the alarm on mailop noted that their outbound spam prevention was among the strictest of any provider, that every feedback loop complaint was actioned immediately, and that there was no significant complaint volume. Despite all of that, the rate limiting arrived without warning.

Within hours, confirmations came in from across the industry. A German organization reported being affected since approximately 19:20 UTC that evening. An experienced deliverability operator confirmed the pattern had been ongoing in waves since January, noting that the earlier S3150 hard blocks had calmed somewhat but were now replaced by a broader set of rate limiting errors including S775 and S3114, alongside 451 4.7.500 server busy rejections coded AS750. That same operator made a particularly striking observation: even IPs enrolled in a major third-party sender certification program were being rate limited. When a support ticket was opened with Microsoft, the response was to contact the certification provider instead. The sender was effectively caught in a referral loop with no direct path to resolution.
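To make the distinction between these response classes concrete, here is a minimal sketch that separates retryable 4xx deferrals from permanent 5xx rejections and extracts Microsoft's internal code. This is not based on any Microsoft documentation; the sample responses are paraphrased from the community reports above, and the regex is an assumption about how codes like S775, S3150, and AS750 appear in rejection strings.

```python
import re

# Hypothetical classifier for the rejection strings reported during this
# wave. Sample responses are paraphrased from community reports, not taken
# from official Microsoft documentation.
CODE_RE = re.compile(r"\b(A?S\d{3,4})\b")

def classify(smtp_response: str) -> dict:
    status = int(smtp_response.split(None, 1)[0])  # leading SMTP reply code
    m = CODE_RE.search(smtp_response)
    return {
        "retryable": 400 <= status < 500,  # 4xx: defer and retry later
        "permanent": status >= 500,        # 5xx: bounce, do not retry
        "internal_code": m.group(1) if m else None,
    }

print(classify("451 4.7.650 The mail server [192.0.2.1] has been "
               "temporarily rate limited (S775)"))
print(classify("550 5.7.1 Messages from [192.0.2.1] were blocked (S3150)"))
```

The practical point is the one the community kept returning to: a well-behaved MTA treats these two classes completely differently, which is why returning a permanent 550 for what Microsoft later described as throttling matters so much.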


Consumer Domains Only, but the Impact Was Universal

One important technical detail that emerged early in the discussion helps narrow down the scope of the problem. The rate limiting and blocking issues during this wave were specific to Microsoft's consumer email domains: outlook.com, hotmail.com, live.com, and related properties. Mail flowing to M365 and Entra-hosted tenant domains did not appear to be affected.

This distinction is more than academic. It suggests the issue sits within the filtering infrastructure that serves Microsoft's free consumer mailbox service, not within the broader Exchange Online Protection stack. It also aligns with what Microsoft's own community representative hinted at on mailop when they noted that their knowledge was more current on the business side of the house. The consumer and business email filtering operations at Microsoft appear to function as largely separate systems, even though they share overlapping infrastructure and tooling.

For anyone monitoring email delivery, this has an immediate practical consequence. An aggregate "Microsoft" delivery metric in your ESP dashboard could easily show 95% while your consumer domain delivery is actually being rate limited at 0%. The strong corporate-side performance masks a complete consumer-side block. Password resets, signup confirmations, and two-factor authentication codes to outlook.com and hotmail.com addresses simply stop arriving, and your dashboard never raises an alarm.
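A quick sketch of why the blended number hides the outage: given hypothetical per-recipient-domain delivery counts (the domains and figures below are illustrative, not real data), the aggregate "Microsoft" rate can sit at 95% while the consumer-domain rate is zero.

```python
# Hypothetical per-domain delivery counts (illustrative numbers only).
# Consumer domains are fully deferred; corporate tenants deliver normally.
MS_CONSUMER = {"outlook.com", "hotmail.com", "live.com", "msn.com"}

deliveries = {
    "contoso.onmicrosoft.com": {"delivered": 1900, "deferred": 0},
    "example-corp.com":        {"delivered": 7600, "deferred": 0},
    "outlook.com":             {"delivered": 0,    "deferred": 400},
    "hotmail.com":             {"delivered": 0,    "deferred": 100},
}

def rate(rows):
    delivered = sum(r["delivered"] for r in rows)
    attempted = sum(r["delivered"] + r["deferred"] for r in rows)
    return delivered / attempted if attempted else None

aggregate = rate(deliveries.values())
consumer = rate([v for k, v in deliveries.items() if k in MS_CONSUMER])

print(f"aggregate 'Microsoft' delivery: {aggregate:.0%}")  # 95%
print(f"consumer-domain delivery:       {consumer:.0%}")   # 0%
```

The only design choice that matters here is computing the rate per recipient domain before aggregating; any dashboard that sums first cannot see this failure mode.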


The Scale of the Problem

What made the February 23 wave different from earlier S3150 incidents was the sheer breadth of affected senders. This was not a case of a few under-maintained servers getting caught by aggressive filtering. The reports came from every segment of the email ecosystem, often simultaneously and through independent channels.

On Reddit's r/proofpoint community, an operator reported that their Proofpoint Protection Server IPs started getting deferred on February 24 with the same 451 4.7.650 error. They submitted a request through Microsoft's OLC support portal and received the standard automated response claiming no issues were detected. Others in the thread confirmed that Mimecast IPs were seeing the same behavior. One commenter recounted being on a call with Microsoft where the representative said she was not aware of other companies having the issue, at which point they pointed to the Reddit thread documenting the exact same problem across multiple organizations.

In the Email Geeks Slack community, a deliverability professional reported seeing the identical 451 4.7.650 S775 error on transactional password reset emails being sent through dedicated IPs via SendGrid. A participant from a major MTA vendor cut through the initial troubleshooting with a direct observation: there have been widespread issues with Microsoft, and the problem might not actually be on the sender's side. Others confirmed they were seeing previously clean IPs being flagged across both dedicated and shared infrastructure.

On Microsoft's own Learn community forums, a thread titled "All sending IPs temporarily rate limited (451 4.7...)" accumulated reports from operators using Mailgun, SendGrid, self-hosted infrastructure, and other platforms. One SaaS operator who had been sending from the same dedicated IP for years, with SPF, DKIM, and DMARC properly configured and no increase in complaints, reported that every email to Microsoft consumer domains had been blocked since February 24. They had appealed through the OLC portal and received the same familiar response: no issue detected.

The common thread across all of these reports is worth emphasizing. These were not new IPs still building reputation. These were not senders with a history of complaint issues. These were long-established, properly authenticated, low-complaint senders using dedicated infrastructure through reputable platforms, and they were all being rate limited at the same time, starting around the same date. German tech publication Borncity also picked up the story, broadening the discussion beyond the English-speaking email operations community.


Microsoft Acknowledges the Problem

At some point during the week, Microsoft added a yellow warning banner to the top of their OLC support portal at olcsupport.office.com. The message stated that they were aware of an issue that may result in certain IP addresses being temporarily rejected at higher rates, that they were actively investigating, and that senders should continue to submit tickets.

The acknowledgment itself was notable. Microsoft rarely issues public statements about delivery issues on their consumer mail platforms. But the timing drew immediate attention in the community. As one deliverability professional observed in the Email Geeks Slack, the banner appeared after operators had already started reporting that delivery was improving. The worst of the rate limiting wave hit between February 23 and February 25. Several operators reported that queues began draining on the evening of February 25, and the original mailop thread starter confirmed that the issue appeared fully resolved for their network by that evening. Crucially, the resolution did not appear to be in response to individual support tickets. It appeared to be a fix deployed on Microsoft's side, applied network-wide.

For some operators, the standard OLC process eventually worked. One operator on Reddit reported that after the initial "no issues detected" response, they replied with additional details and received a second response approximately 23 hours later confirming that throttling limitations had been adjusted. About two hours after that message, emails started flowing again. The total time from initial ticket submission to resolution was roughly 25 hours.

But the experience was not uniform. Another operator in the Email Geeks Slack noted that delivery looked better for a couple of days after the initial wave but then became unreliable again. The S3150 hard blocks documented in our earlier article and the broader rate limiting wave may represent different but overlapping filtering mechanisms, which would explain why some senders recovered while others continued to experience intermittent issues.


The Contradiction Between Error Codes and Reality

A recurring theme in our S3150 article was the mismatch between what Microsoft's systems communicate and what is actually happening. That contradiction deepened significantly during the rate limiting wave.

When the S3150 "mirage" discussion gained momentum on mailop during the week of February 20, a representative from Microsoft's spam analysis team responded that S3150 represents throttling and that the advice was to slow down deliveries. The affected sender replied that their total volume to all Outlook servers for the entire day was fewer than 1,500 messages, and asked two questions that captured the core frustration: if this was a rate limit, where was it documented so systems could be configured appropriately? And if this was a temporary throttling issue, why was Microsoft returning a permanent 5xx error code instead of a temporary 4xx?

Microsoft's representative clarified that the thresholds are IP-specific and dynamically assigned, and that S3150 is triggered when a sender fails to slow down after receiving 4xx warnings. A hosting provider then challenged this explanation with hard data. They checked their mail logs going back to December 2025 and found exactly zero 421 temporary errors from Outlook across their entire network. Not on one IP. Not on some IPs. Zero, network-wide. The only errors present were permanent 550 rejections with S3150. No temporary warnings preceded them.

This is significant. If the intended behavior is that senders should slow down after receiving temporary errors, but those temporary errors are never issued before the permanent block, then the system is not functioning as designed. Senders cannot react to warning signals they never receive. The hosting provider's response, which described the implementation as feeling "half-baked," was blunt but technically grounded.

The same pattern appeared in the OLC support process. Across all four channels where the February crisis was discussed, the experience was remarkably consistent: submit a request, receive an automated response claiming no issues are detected, reply to that response to escalate to a human, and then either receive an actual mitigation or wait for the problem to resolve on its own. As one operator summarized it on mailop: the first step is to open a ticket, the second step is to receive a response from a bot that says everything is fine when it is not, the third step is to reply again and hope your case reaches someone who can help.


What This Means for Email Operations Teams

The February 2026 Outlook crisis, taken together with the ongoing S3150 frustrations, carries several practical implications for anyone responsible for email delivery.

Microsoft's consumer mail filtering has become meaningfully less predictable than it was even six months ago. The error codes do not reliably indicate the nature of the problem. The self-service tools do not reliably reflect the actual state of your sending reputation. And the support process requires persistence and escalation before reaching someone who can take action. None of this is new in isolation, but the frequency, scale, and cross-platform nature of the February incidents represent a step change from earlier episodes.

No sender category proved immune during this wave. It affected enterprise email security platforms with some of the most carefully managed IP infrastructure in the industry. It affected senders with third-party certification. It affected operators sending fewer than 1,500 messages per day and operators sending millions. Stable delivery to Microsoft consumer domains today is not a guarantee of stable delivery tomorrow, and the lack of documented thresholds means there is no way to proactively stay beneath a line that has never been drawn.

The monitoring implications are perhaps the most actionable takeaway. Most ESP dashboards aggregate delivery metrics at the provider level, and many do not distinguish between Microsoft's consumer and business platforms. The February crisis was invisible to any monitoring system that does not break out consumer domain delivery as a separate metric. Real-time, per-domain delivery monitoring is no longer a luxury. It is the difference between discovering a Microsoft consumer block in minutes and discovering it days later when users report that their password reset emails never arrived.
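A minimal version of that per-domain alarm can be sketched as a sliding window over delivery attempts to consumer domains. Everything here is hypothetical (domain list, window size, threshold); it illustrates the shape of the check, not a production design.

```python
from collections import defaultdict, deque

MS_CONSUMER = {"outlook.com", "hotmail.com", "live.com", "msn.com"}
WINDOW = 200           # recent attempts per domain to consider
MIN_SAMPLE = 20        # don't alarm on a handful of attempts
DEFER_THRESHOLD = 0.5  # alarm when >50% of recent attempts defer

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def record(domain: str, deferred: bool) -> bool:
    """Record one delivery attempt; return True if this domain should alarm."""
    if domain not in MS_CONSUMER:
        return False  # corporate/tenant domains tracked elsewhere
    w = windows[domain]
    w.append(deferred)
    return len(w) >= MIN_SAMPLE and sum(w) / len(w) > DEFER_THRESHOLD

# Simulated burst of 451 deferrals to one consumer domain:
alarm = False
for _ in range(50):
    alarm = record("hotmail.com", deferred=True)
print("alarm:", alarm)
```

The key property is that the window is keyed by recipient domain, so a total consumer-side block trips the alarm within minutes even while the aggregate Microsoft rate still looks healthy.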


Looking Forward

Microsoft has acknowledged the issue and appears to have deployed fixes during the week of February 25. Several operators report that delivery has normalized, though others note the situation remains inconsistent.

What is clear is that the email operations community's frustration with Microsoft's consumer mail platform has reached a level that is qualitatively different from earlier complaints. The February wave prompted discussions about anti-competitive behavior, sparked a formal "IN DEFENSE of Microsoft" counter-thread on mailop (the very existence of which says something about the temperature of the conversation), and generated simultaneous discussion across every major channel where email professionals gather.

The industry has been patient. Operators understand that filtering at Microsoft's scale is genuinely difficult. But patience requires trust, and trust requires functional feedback loops. When error codes misclassify the problem, when SNDS shows green while mail is being rejected, when the delist portal says you are not blocked while you clearly are, and when the support process consistently starts with a bot telling you nothing is wrong, the feedback loop is broken. Fixing it is not a goodwill gesture. It is a prerequisite for the cooperative relationship between senders and receivers that makes email work.


This post is based on publicly observable discussions from the mailop mailing list, the Email Geeks Slack community, Reddit, and Microsoft's Learn community forums, spanning the period of February 20 through February 27, 2026. No private communications were used. For background on the S3150 error code that preceded this broader crisis, see our earlier analysis: Microsoft's S3150: The Most Frustrating Error Code in Email Deliverability.

Engagor monitors deliverability signals across all major mailbox providers with per-domain granularity, distinguishing between consumer and corporate Microsoft domains in real time. If February's crisis would have been invisible to your current monitoring stack, that is exactly the problem we built our platform to solve.

About the author

Bram Van Daele

Founder & CEO

Bram has been working in email deliverability since 1998. He founded Teneo in 2007, which has become Europe's leading email deliverability consultancy. Engagor represents 27 years of hands-on expertise encoded into software.
