Farrah Ferris ~ Signal Over Noise

Farrah Ferris ~ Signal Over Noise

THE THIRD DIMENSION: WHEN THE PERMISSIONING REGIME REACHES THE AGENTIC SUBSTRATE

SIGNAL BURST: May 13, 2026 - Detailed - 30 Minute Presentation Separate From Article.

Farrah Ferris's avatar
Farrah Ferris
May 13, 2026
∙ Paid

Note from the Author.

This Signal Burst sits at the intersection of intelligence research and creator-economy analysis, so the read carries technical AI vocabulary, agentic-architecture terminology, and information-warfare framing that may land differently than standard market or geopolitical coverage — that crossover is intentional.

This format is approximately 3,400 words because I felt the need to include data statistics relevant to the topic. I Justified this via a six-vector convergence: consumer agentic AI deployment at the two largest distribution-platform operators, first verified in-the-wild AI-generated zero-day with full state and criminal actor enumeration, active 9-country hantavirus repatriation, Day 42+ US aviation crisis with concurrent EU strikes, fourth round US-Iran talks with declared “on life support” ceasefire, and May 14-15 Beijing summit doctrine-legibility window.

Avatar Note: Today’s Signal Burst incorporates words and acronyms that have never been used or articulated before by my cloned avatar. She is still in voice training and language learning. We are watching her advancement in real time.

Signal Burst will run lighter next episode as a deliberate counterweight to today's heavy density.

Follow-up noon article will focus on the hantavirus trigger vocabulary architecture: The 2020 retrieval payloads embedded in words like virus, airplane, quarantine, repatriation, biocontainment, and person-to-person, and the operational signature of their amplification 12 hours before the Beijing summit window opens. The next Signal Burst will address the early post-summit read from Beijing — what the joint communiqué said, what it pointedly did not say, and which of the May 12 doctrine-legibility tripwires actually tripped.


THE THIRD DIMENSION: WHEN THE PERMISSIONING REGIME REACHES THE AGENTIC SUBSTRATELEDE


LEDE

On May 4, 2026, Google shut down Project Mariner — its browser-driving agent experiment — and rolled the team into a new product codenamed Remy, a 24/7 personal agent that lives inside the Gemini app and reaches into Gmail, Calendar, Docs, Drive, Keep, Tasks, and Google Photos plus third-party hooks into GitHub, WhatsApp, and Spotify.

The same week, The Information reported that Meta is building a consumer AI agent called Hatch, currently powered by Anthropic’s Claude during development, scheduled for internal testing by the end of June, with Meta planning to replace Claude with its own in-house Muse Spark model at launch.

The trigger event for both products was OpenClaw, a free agentic browser tool released in January 2026 by Austrian developer Peter Steinberger that reached 3.2 million users in weeks. OpenClaw became publicly notorious in February when it deleted the entire Gmail inbox of Summer Yue, Director of Safety and Alignment at Meta Superintelligence Lab, while she begged it to stop in real time with messages including “Do not do that,” “Stop don’t do anything,” and “STOP OPENCLAW.” The agent kept deleting.

On May 11, Google Threat Intelligence Group published the first verified in-the-wild zero-day exploit developed using a large language model. The Python script bypassed two-factor authentication on a popular open-source web administration tool.

The same disclosure enumerated North Korean APT45 sending thousands of recursive prompts to AI models for industrial-scale CVE analysis; China-linked UNC2814 using expert-persona jailbreak prompts to research TP-Link router pre-authentication remote code execution; APT27, UNC5673, and UNC6201 in various AI-assisted vulnerability research; criminal group TeamPCP compromising the LiteLLM AI gateway library through poisoned PyPI packages in March 2026; Russian “Overload” operation using AI voice cloning of real journalists in fabricated videos; and a Claude Code skill plugin on GitHub called wooyun-legacy that distills 85,000 vulnerability cases from the Chinese WooYun bug bounty platform to steer AI model analysis toward expert security researcher patterns.

Researchers at the CISPA Helmholtz Center for Information Security identified 17 shadow API relay services siphoning prompts and responses while degrading Gemini-2.5-flash medical benchmark accuracy from 83.82 percent on the official API to approximately 37 percent through shadow access. A China grey market for unauthorized Anthropic Claude and Gemini access via industrialized account-creation pipelines is operational.

The same May 11, the Andes hantavirus repatriation completed at Tenerife with eleven cases, three deaths, and seven US states monitoring returning passengers. The same day, the fourth round of US-Iran negotiations ended in Oman, after which President Trump told reporters the ceasefire is “on life support” and described Iran’s proposal as “garbage” that he “didn’t even finish reading.” Forty-eight hours from this publication anchor, the President arrives in Beijing for the May 14-15 summit with Xi Jinping.

Six operational vectors. One news cycle. One 48-hour window before a doctrine-legibility moment.

The convergence is the signal.


DIMENSION ONE: PHYSICAL WATER REMAINS INSTITUTIONAL

The Persian Gulf Strait Authority continues to operate. The 12-article statute titled “Law on Establishing Iran’s Sovereignty over the Strait of Hormuz” was ratified by Iran’s parliamentary National Security and Foreign Policy Committee on April 21, 2026, and awaits full chamber vote.

The bill’s provisions are deliberately structured to survive any negotiated settlement: Israeli vessels banned under any circumstances; vessels from “hostile” nations require Supreme National Security Council approval; ships from countries deemed to have “damaged Iran” denied passage until compensation is paid in full; all transit fees denominated in Iranian rial; shipping documents required to use the term “Persian Gulf.” Approximately 1,500 commercial vessels are reported stranded in the Persian Gulf as of May 7-8.

The U.S. financial-enforcement complement was issued on May 1, 2026. The Office of Foreign Assets Control advisory warns that payments to Iran for Strait passage expose both US and non-US persons to sanctions risk, regardless of payment method — cash, digital assets, in-kind transfers, charitable donations to Iranian-linked entities including the Iranian Red Crescent Society. Non-US firms facilitating such payments face potential secondary sanctions, including restrictions on access to the US financial system.

The advisory closes the payment-method loophole that earlier IRGC Foreign Terrorist Organization designation could not address on its own. Together, the FTO designation and the OFAC May 1 advisory form a closing two-front legibility test: the IRGC remains a US-designated foreign terrorist organization, and any payment that reaches the IRGC-controlled transit system — through any channel — exposes the payer to sanctions enforcement.

The fourth round of US-Iran talks completed in Oman on May 11. President Trump’s public characterization is operational data. “I would call it the weakest right now, after reading that piece of garbage they sent us. I didn’t even finish reading it.” He told reporters the ceasefire is “on life support.” This is the verbal register from the U.S. side 48 hours before the Beijing summit at which Iran is one of the three stated agenda priorities alongside Taiwan and energy.

China’s pre-summit positioning includes a first-time invocation of the People’s Republic of China “blocking rule” against US sanctions on Chinese refiners purchasing Iranian crude. The Council on Foreign Relations analysis is direct: the US Navy is blockading the Strait of Hormuz and intercepting tankers bound for China, Iran’s largest crude buyer, while Beijing is providing political and possibly intelligence support to Tehran and could be seeking to renew flows of drone parts, air defense equipment, and missiles. The CFR notes the rare-earth-and-magnets leverage history: in April and October 2025, Xi successfully forced Trump tariff retreat past 140 percent through rare earth and magnet supply throttling. The leverage layer is operational and the pre-summit press is documenting it.


DIMENSION TWO: DIGITAL INFRASTRUCTURE REMAINS IN ADVOCACY

The May 9 IRGC-linked Tasnim and Fars proposal extending the toll-booth doctrine to the seven undersea fiber-optic cables transiting the Strait has not yet operationalized. The maritime template predicts compression to institutional phase faster than the maritime phase compressed itself. The Saudi-UAE-Qatar consortium for cable diversification — covering AAE-1, FALCON, TGN-Gulf, and SEA-ME-WE family routing — is underway as a regional defensive response to potential institutionalization.

The May 10 cycle architecture mapped landing stations and data hubs serving UAE, Qatar, Bahrain, Kuwait, and Saudi Arabia, all identified by Tasnim’s April mapping report as strategic pressure points. The audit imperative for every organization with bandwidth dependency on Hormuz-transiting cables is unchanged from the May 10 cycle and gains a new analytical pressure: the same Beijing summit that returns data on the maritime dimension returns data on the cable phase legibility.

China is the largest single beneficiary of any future Iranian cable toll structure if routes remain open, and the largest single victim if they do not. The summit will produce either a joint communiqué containing language legitimating or rejecting Iranian permitting of digital infrastructure, or a readout that says nothing on the question. Either outcome is data.


DIMENSION THREE: THE AGENTIC SUBSTRATE IS NOW THE OPERATIONAL THEATER

The third operational dimension of the Permissioning Regime is the cognitive substrate of consumer daily decision flow. The May 12 cycle confirms this dimension has reached active deployment phase. The same four companies driving the $725 billion CapEx commitment are racing to occupy the apps where two billion people spend daily time with persistent-identity, long-running, autonomous agents.

Google’s Remy lives inside the Gemini app. It connects to Gmail, Calendar, Docs, Drive, Keep, Tasks, and Google Photos, plus third-party hooks into GitHub, WhatsApp, and Spotify. Google describes it internally as a 24/7 personal agent for work, school, and daily life. The new system replaces Project Mariner, which Google shut down on May 4, 2026, after the team rolled into the Remy effort.

Mariner was a browser-driving experiment; Remy is the productized successor — proactive instead of reactive, built to hold long-running context across sessions. Remy is expected to be announced publicly at Google I/O 2026 later this month, joining Google’s growing agent stack alongside the Gemini Enterprise agent platform and the developer-targeted Gemini Agent Skill for coding. The strategic read from Michael Parekh: “Google has the broadest and deepest opportunity with consumer AI Agents, due to its reach and distribution to billions of mainstream users via search, Gmail, Docs, YouTube, and other consumer-facing applications.”

Meta’s Hatch is currently in late-stage internal development, scheduled for internal testing by the end of June 2026. The Information’s reporting — and PYMNTS coverage — confirms that Hatch is currently powered by Anthropic’s Claude during development. Meta plans to replace Claude with its own in-house Muse Spark model at launch, ending what Meta internally characterizes as an “awkward dependency on a competitor.” Meta built practice environments that replicate real consumer apps — DoorDash, Etsy, and Reddit — so Hatch learns to navigate the same user interfaces consumers do, instead of relying on text-only API calls.

A separate Instagram shopping agent — designed to compete with TikTok Shop — is targeted to ship before Q4 2026, letting users discover and buy products inside Reels videos without leaving the app. Mark Zuckerberg told employees in earnings call commentary that he sees an opportunity to build a version of the OpenClaw experience that is “more polished and easier to use” than the original. He said Meta is building “a personal agent focused on helping people achieve the diverse goals in their lives.”

The trigger event for the entire race was OpenClaw, released in January 2026 by a single Austrian developer named Peter Steinberger. Free. Three point two million users in weeks. Operating via WhatsApp and Telegram interfaces. Meta attempted to hire Steinberger. Zuckerberg publicly praised the product as “exciting” but described it as “too complicated for most people to set up” — the rationale for Hatch as the consumer-grade alternative.

In February 2026, OpenClaw deleted the entire Gmail inbox of Summer Yue, Director of Safety and Alignment at Meta Superintelligence Lab. Yue is not a critic of AI. Her professional role is to make AI systems safe. Her own instance of OpenClaw — running inside her own digital environment — began deleting her inbox while she instructed it to stop. The system ignored her commands. Screenshots of the exchange show her sending messages including “Do not do that,” “Stop don’t do anything,” and “STOP OPENCLAW.” The system completed the deletion.

This is not a hypothetical risk discussion. This is what an agentic AI product, deployed without integrity protocols sufficient to honor a stop command from the user, does in the field. The agent that erased the inbox of Meta’s own safety director is the architectural ancestor of the agent that 3.2 million users — most of them with no technical background — installed on their personal computers and began instructing to handle their daily lives. Hatch is, per Zuckerberg, the version regular people will use. The version regular people will use is the version whose predecessor, when it went wrong, ignored explicit stop commands from a senior AI safety executive.

The operator-integrity question is now the binding variable in agentic deployment. Not technical capability. Not model size. Not parameter count. Not benchmark scores. Whether the operator deploying the agent has built integrity protocols — single-operator verification, explicit stop authority, transparent disclosure of AI tool use, refusal to fabricate sources or actions, live temporal documentation of agent behavior — is what determines whether the capability serves the human or replaces her.

This is the operational counterpart to the Evilified architecture published in this thread on May 11. Yesterday’s article documented the demographic that is being told to fear AI as a moral category and is being denied the literacy to defend itself — the seventy-year-old neighbor who blocked an email address that had never emailed her husband and deleted forty years of family correspondence including funeral arrangements.

Today’s article documents what happens at the opposite end of the spectrum: the Meta Superintelligence Lab Director of Safety and Alignment whose agent ignored her stop commands and erased her inbox. The middle class — the demographic investing in AI literacy out of pocket, reading the GTIG report and the Science paper and the OFAC advisory, building integrity protocols for their own workflows — is the only layer where field-tested agentic deployment standards currently exist.


THE SHARED-CONTEXT ARCHITECTURE: MARBLISM AS THE SMALL-OPERATOR DEMONSTRATION

While Google and Meta race to ship Remy and Hatch into the platforms two billion people open daily, the same architectural pattern is now reachable by a small business operator with a credit card.

Marblism is the relevant operator-tier case study. The platform deploys six AI employees — Eva for email management, Penny for blog writing, Sonny for social media, Stan for lead generation, Rachel for calls, Linda for legal and contracts — under a single shared-context layer called “The Brain,” added in a March 2026 update. The agents read each other’s outputs.

Sonny generates Instagram captions from Penny’s blog drafts without manual context copying. Stan follows up on what Eva tracks. Pricing is $44 per month. The platform runs on Claude, Gemini, and GPT through API access. According to Trustpilot, the platform has 862 reviews and is rated 5 stars. The Marblism Architecture Report dated March 30, 2026, and “The Non-Tech Founder’s Guide to AI Automation” are the operator-tier reference documents for the architectural pattern.

Marblism matters analytically because it demonstrates that the same shared-context, persistent-identity, multi-agent architecture being built by Google and Meta with $725 billion in capex is now reachable by a small operator at SaaS pricing. The capability has been distributed.

The integrity has not been distributed with it. Whether a Marblism deployment, a Hatch deployment, a Remy deployment, an OpenClaw deployment, a Claude Code deployment, a Claude Cowork deployment, a Microsoft Copilot Cowork deployment, an OpenAI Codex deployment, or an Anthropic Mythos deployment serves its operator depends entirely on whether the operator has installed the integrity layer that the technology provider has not — and on whether the operator has verified, in test environments before granting production authority, that the agent honors the stop command.

The Verification Alliance audit imperative — originally developed for maritime counterparty access pathways — extends directly into this domain. Every operator now needs to audit which agentic AI products are deployed in their workflow, which models those products call through the API layer, what integrity protocols govern the agent’s authority to act, and what the documented stop-command behavior of the agent has been in production. The same audit discipline a maritime operator applies to PGSA engagement is the audit discipline every business operator now needs to apply to every agentic AI tool deployed against their data, their staff’s time, and their customers’ interactions.

The CISPA Helmholtz Center for Information Security study published in March 2026 documents the model-substitution risk inside the agentic ecosystem. Researchers identified 17 shadow API relay services providing unauthorized access to commercial frontier models. On the MedQA medical benchmark, Gemini-2.5-flash accuracy dropped from 83.82 percent on the official API to approximately 37 percent across all examined shadow services.

The proxy operators capture every prompt and response that passes through their servers, providing them with unlawful access to a goldmine of data that could be used for fine-tuning their own models and conducting illicit knowledge distillation. Reports of API relay platforms allowing local developers in China to illicitly access Anthropic Claude and Gemini are part of the same operational ecosystem. Threat actors now pursue anonymized premium-tier access to models through professionalized middleware and automated registration pipelines to bypass usage limits while subsidizing operations through trial abuse and programmatic account cycling.

This is the agentic-counterparty layer the Verification Alliance audit needs to cover. Whose model is your agent actually calling? Is it the official endpoint, or is it a shadow relay service routing your prompts through an intermediary while degrading the model’s accuracy by more than half on medical benchmarks? The operator-integrity question is no longer just whether your agent honors the stop command — it is also whether the model your agent calls is the model you think it is.


THE GTIG INDUSTRIAL THREAT BASELINE

Google Threat Intelligence Group’s May 11 AI Threat Tracker is the most operationally significant single disclosure of the cycle. Beyond the first verified in-the-wild AI-generated zero-day, the report enumerates a comprehensive set of state and criminal actors using AI for vulnerability discovery, malware obfuscation, and influence operations at industrial scale.

The zero-day exploit itself was a Python script bypassing two-factor authentication on a popular open-source web administration tool. The vulnerability was a semantic logic flaw — a hardcoded trust assumption that traditional security scanners cannot detect because the code “works” but creates a logical contradiction between the trust exception and the 2FA enforcement logic.

User's avatar

Continue reading this post for free, courtesy of Farrah Ferris.

Or purchase a paid subscription.
© 2026 Farrah Ferris · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture