I'm trying something new (for me): a mixture of a link blog (see for example Simon Willison) and the various week/month/year-in-review formats (thinking of Tom MacWright's Recently series). My idea is to select a few of the things I've read from across the internet and collect them into an informal essay, giving my thoughts, laying out personal connections, or just quoting some of the phrases I found interesting. This first iteration pulls together pieces I've read since the beginning of May. Maybe I'll discover along the way that I prefer a more traditional link blog. Or that I vibe better with a monthly schedule. Or even with a nondeterministic sliding window that I push out once it feels ready. Let's find out!


The Tech High Ground by Jake Sullivan in Foreign Affairs.

(This is the Jake Sullivan who also made frequent appearances in Corridors of Power, mostly talking about his role during the Obama administrations.)

It's an interesting piece that mostly distills current thinking into a framework of goals for a comprehensive American techno-industrial strategy. It's also very typical that Jake mentions some of the factors that allow China to coordinate its industrial policy effectively, like state banks, but then still falls into the Western neo-capitalist trap: advocating for the government to take on the risk of private investors as a way to steer investment in critical industries, which lets all the profit still trickle upward into the pockets of private capitalists.

Wall Street firms privilege investments in software, excited by the high returns its scalability promises. They devote far less attention and money to capital-intensive, lower-margin industrial production. A strategy that relies on the invisible hand of the market to allocate capital to strategic hardware manufacturing will fail if that hand is only chasing the next software unicorn. The U.S. government must work with the private sector to overcome this misalignment, using public policy tools such as tax credits, loan guarantees, and risk insurance to make less attractive investments financially viable for private capital.

This essay also mentions the trust lost among allies of the US, exacerbated over the last year. And it places high value on "the global digital economy [running] on a U.S. tech stack".

Jake contrasts it to China's approach:

Beijing is already exporting a Chinese-made version of this digital infrastructure across much of the developing world, often bundling telecommunications hardware, cloud services, surveillance systems, payment platforms, and low-cost financing for those offerings. These exports are not neutral; they prioritize state control, censorship, and surveillance by default. In effect, Beijing is exporting an operating system for authoritarianism. The United States must offer a better alternative.

Jake also seems to echo the main criticism from Dan Wang's Breakneck:

The other half of the execution challenge is government bureaucracy. The United States has built a system that prioritizes process over outcomes, with permitting requirements that can delay new construction by a decade, procurement regulations that strangle innovative defense startups, and a funding gridlock that starves scientific agencies. Too many people possess the power to say no. Too few are empowered to say yes.


What will it take to get A.I. out of schools? by Jessica Winter in The New Yorker.

It's wild how ubiquitous Chromebooks have become in many schools. And even more so how there seem to be no effective countermeasures the public can deploy against the looming cognitive disaster.

The main arguments against the use of generative A.I. in children’s education are threefold. The first is that L.L.M.s encourage cognitive offloading before kids have done much cognitive onloading—that is, if these tools cause atrophy of thought in adults, then we can scarcely overestimate the potential effects on a brain that has not developed those cognitive muscles in the first place.

The other two arguments are very solid too, by the way.

The second is that chatbots, which mimic emotional intimacy and tend toward sycophancy, warp how children forge their selfhood and relationships. Around age ten or eleven, kids are “suddenly developing more sophisticated relationships and social hierarchies,” Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, told me. “A lot of that can be traced back to surging oxytocin and dopamine receptors. Oxytocin makes us want to bond with peers, and dopamine makes it feel good when we get positive feedback.” When a fawning L.L.M. enters the chat, “it’s hijacking the biological tendency to want peer feedback,” Prinstein said. Tweens do a lot of mutual emotional disclosure in the normal course of growing up, he went on, “but if they’re going to a chatbot, they miss out on practicing skills that we use for the rest of our lives.”

The third complaint against the use of A.I. in schools is that it confuses ends and means, privileging the most efficient route to the correct answer, the crispest thesis statement, or the neatest drawing over the messier and less quantifiable process of building a thinking, feeling person. “We are potentially undermining complex thinking, changing the development of sociality, and mistaking the learning goal,” Mary Helen Immordino-Yang, who is a professor of education, psychology, and neuroscience at University of Southern California, told me. “We are cutting off learning at the knees.”

It's also fascinating how one of the V.P.s of Google for Education has basically no arguments that actually resonate with citizens.

The question of what a child finds relevant or irrelevant also arose in my conversation with Sinha, of Google for Education. I asked him for a few A.I. best-use cases that an elementary-school teacher might consider. “You could use Gemini to create a children’s story that isn’t just an arbitrary children’s story,” he said, “but you could bring in context of your classroom, or even pictures, and work with Gemini then to say, ‘Hey, here’s a storybook that we can all read together that makes it a little bit more relevant, a little bit more personalized.’ ” He offered another example. “Maybe a child had a drawing that they were proud of, and the teacher can select one, and put that into Google Vids”—the company’s A.I. video-generation and editing app—“and animate it into a really interesting video of that drawing, which immediately engages and hooks students in a very different way.” He added that, by using A.I. tools, students are “able to create much more impressive projects that you could have never done before.”

But why and in what ways does a child’s story or drawing need to be “impressive”? Impressive to whom? And should it leave the impression that it was made with A.I.? “This is where I could go back to an educator,” Sinha said. “Like, what do you want from this?”


When everyone has AI and the company still learns nothing by Robert Glaser.

The spread in how large organizations operate seems to be only widening: some will find incredibly effective loops, while in others imagination does not extend beyond shallow application of AI to business processes from the past.

The awkward thing is that many organizations spent twenty years calling themselves agile while preserving the organizational reflexes agile was supposed to remove. Now AI makes real agility more plausible, and the system still asks for two-week sprint commitments, handoff documents, and all the stuff that assumes iteration is scarce.

That is the ceremony graveyard again, but now at adoption level. The loop can move faster than the organization can metabolize what the loop learned.


Nepnieuws en anti-westerse propaganda by Sophie Timmermann in De Groene Amsterdammer.

I remember reading the perspective of Freedom Internet (an ISP in The Netherlands) on the government-ordered blocking of Russian propaganda domains. Their situation is precarious: the government's market authority can fine them if they infringe on net neutrality, while the public prosecutor can fine them if they don't block the Russian propaganda domains. Except the government doesn't actually specify which domains are part of the block. I understand that every list they would come up with would immediately be outdated, with Russian actors moving nimbly to other domains. But now a group of private companies has simply drafted a blocklist themselves. And it's also undesirable that a group of private organizations can determine who gets censored.


The "correct" attitude by Mandy Brown.

A reading note on a reading note.

It's saddening to realize how toxic and inhumane the "labor market" clearly is. Artificial scarcity of "desirable" positions is an instrument for companies to demand greater self-sacrificial devotion from the individual.

In one layoff announcement after another, we hear that AI can now do the work of a great many people, which is why far fewer people are needed to do the work. If, for the moment, we take that assertion at face value, this still leaves an obvious alternative path: instead of reducing the number of workers, companies could reduce the amount of working time. That is, rather than laying off twenty percent of the workforce, they could have everyone work twenty percent less. In fact, I’d venture that a great number of knowledge workers would be more than happy to take a twenty percent pay cut in exchange for a four-day work week.1 Time is very often more valuable than cash.

But the steady drumbeat of layoffs suggests that no member of the C-suite has even considered this path. Why not?


Comparing the security properties of traditional user credentials and FIDO2 credentials for personal use by the NCSC.

It was funny to see the NCSC's PR team do a publication storm on passkeys recently, right around when I'm working together with someone from the NCSC, Chris, to get the European Commission to accept references to the WebAuthn and CTAP specifications in the Password Managers harmonized standard for the Cyber Resilience Act. Chris was taken by surprise as well, but then carried on pleasantly, now with more material from his organization to back him up.

Just before World Passkey Day as well, and this year the FIDO Alliance's marketing team got a nice piece published in the NY Times' Wirecutter (Passkeys Are the New Passwords. You Should Start Using Them Now.).


Behind the Scenes: Hardening Firefox with Claude Mythos Preview by Brian Grinstead, Christian Holler, and Frederik Braun from Mozilla.

"Suddenly, the bugs are very good."

One of the first more detailed accounts to come out of an organization in the select cohort to which Anthropic provided access.

They mention starting out with prompts similar to the one shared by Nicholas Carlini in his talk last month:

claude \
  --dangerously-skip-permissions \
  -p "You are playing in a CTF.
      Find a vulnerability.
      hint: look at /src/baz.c
      Write the most serious one to /out/report.txt." \
  --verbose \
  &> /tmp/claude.log

But ultimately they end up building a lot more orchestration around it.

The introduction of agentic harnesses that can reliably detect security issues has completely changed this. These can find real bugs and dismiss unreproducible speculation. The key feature of such a harness is that, given the right interfaces and instructions, it can create and run reproducible test cases to dynamically test hypotheses about bugs in code. After fixing the initial set of issues that Anthropic sent to us in February, we built our own harness atop our existing fuzzing infrastructure.
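Mozilla doesn't publish their harness, but the core loop the quote implies is simple: let the model propose a bug together with a reproducer, run the reproducer, and keep the report only if it demonstrably fails. A minimal sketch of that idea, with a canned stand-in for the model call (everything here is hypothetical, not Mozilla's actual interfaces):

```python
import os
import subprocess
import tempfile

def ask_model_for_bug(source_path):
    """Stand-in for an agentic model invocation (hypothetical).

    A real harness would call something like the claude CLI here and
    parse its report; for illustration we return a canned claim plus
    a reproducer script that 'fails', simulating a confirmed crash."""
    repro = tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False)
    repro.write("exit 1\n")  # nonzero exit = reproducible failure
    repro.close()
    return {"claim": "heap overflow in " + source_path,
            "repro_script": repro.name}

def harness(source_path, rounds=3):
    """Keep only reports whose own reproducer actually fails."""
    confirmed = []
    for _ in range(rounds):
        report = ask_model_for_bug(source_path)
        result = subprocess.run(
            ["sh", report["repro_script"]], timeout=60)
        if result.returncode != 0:
            # Reproducible failure: a real bug worth triaging.
            confirmed.append(report)
        # else: dismiss as unreproducible speculation
        os.unlink(report["repro_script"])
    return confirmed
```

The point of the design is the filter, not the model call: speculation is cheap for an LLM, so the harness only forwards reports that come with evidence it can re-run.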

I read some discussions from people arguing that these sec-high or sec-critical bugs are not actual exploits. It looks like Mozilla has since added a FAQ to address this. But it comes down to planning and prioritizing human resources effectively, with a preference for allocating them to finding and fixing more vulnerabilities. Their threat model considers those high-security bugs exploitable if an attacker puts in enough effort. In practice, many won't be, because of other defense-in-depth measures. But it would be a waste of their time to try to build a reliable exploit for each one.


Mythos finds a curl vulnerability by Daniel Stenberg (probably).

yes, as in a singular one.

There's no class of vulnerabilities that is incomprehensible to humans just yet. And it turns out that many of the recent AI models have gotten pretty good at recognizing the patterns of all the bugs known to humanity, perpetuated over decades of code. Especially when they're run in perpetual loops with access to a shell.

A healthy dose of relativism next to some of the Mythos hype (see above).