Monday Brief for 26 April 2021
An endless frontier; a huge chip deal is on hold; and the EU takes action on AI
Heads Up: New Publication
My colleague Bill Drexel and I have a new report, Quantum Computing: A National Security Primer. This paper discusses the basics of the technology, its key national security implications, and the current state of the global race for strategic quantum advantage.
Quantum computing has vast potential in a broad range of fields, including national security.
The United States faces crucial security vulnerabilities if one of its adversaries achieves quantum computing superiority before American defenses are sufficiently updated.
In recent years, governments around the world have pledged more than $20 billion toward quantum development, with China leading in public funding by a decisive margin.
Building on its recently established National Quantum Initiative, the US must safeguard its national security interests in quantum computing through enhanced risk awareness, strategic international cooperation, and accelerated network securitization.
Need a one-minute primer on quantum science? Here’s a video we pulled together for you!
An Endless Frontier
What’s New: A bipartisan group of lawmakers has introduced the “Endless Frontier Act,” legislation that would increase investments in science and technology innovation in order to bolster American economic competitiveness and national security.
Why This Matters: This is a sweeping bill, and Senate Majority Leader Chuck Schumer says it will be one of the Senate’s next legislative priorities.
The bill would expand the National Science Foundation by creating a Technology and Innovation Directorate with $100 billion to invest in “basic and advanced research, commercialization, and education and training programs in technology areas critical to national leadership.”
The bill would also provide $10 billion for establishing “technology hubs” around the US and $2.4 billion for American “manufacturing and competitiveness,” and it would set up a Supply Chain Resiliency and Crisis Response Program.
The Endless Frontier Act was introduced by the bipartisan coalition of Senate Majority Leader Chuck Schumer (D-NY), Sen. Todd Young (R-IN), and Reps. Ro Khanna (D-CA) and Mike Gallagher (R-WI).
The bill is also co-sponsored by Sens. Maggie Hassan (D-NH), Susan Collins (R-ME), Chris Coons (D-DE), Rob Portman (R-OH), Tammy Baldwin (D-WI), Lindsey Graham (R-SC), Gary Peters (D-MI), Roy Blunt (R-MO), Steve Daines (R-MT), Chris Van Hollen (D-MD), Mitt Romney (R-UT) and Mark Kelly (D-AZ), as well as Reps. Susan Wild (D-PA), Mike Turner (R-OH), Jamaal Bowman (D-NY), Brian Fitzpatrick (R-PA) and Mikie Sherrill (D-NJ).
What I’m Thinking:
This isn’t the bill I would have written. There’s lots to poke at — nebulous expectations, minimal accountability, huge sums of money with no clear guarantee of success, and a whole lot of benefits for groups that are, at best, peripherally engaged on these issues.
But, what’s the alternative? The truth is that American technological leadership is not inevitable. In fact, we now have compelling evidence that our chief geopolitical rival, China, may exceed us in some capabilities (quantum computing), is a true peer competitor in other technologies (telecommunications and offensive cyber), and is quickly gaining on us in still more (AI and automation). If you believe that these and other emerging technologies will be a decisive variable in future economic and national security capacity, then it’s hard to argue against taking significant action, even if that action is a long way from ideal.
Trade-offs are real. The idea that there are no free lunches is an inherently “conservative” idea — policymaking in the real world always requires tradeoffs. Resources spent “here” mean they cannot be spent “there.” This applies to the Endless Frontier Act too. But, while we’ve been waiting for ideal solutions in the innovation race with China, Beijing has been making up serious ground — largely financed by US consumption — and we now find ourselves facing the very serious possibility of falling behind. There are better ways to deal with these challenges than the Endless Frontier Act, but none of those are on offer. This bill can actually get passed and, even with all of its shortcomings, it appears likely to significantly advance basic R&D in the US and I’m convinced that is essential for our long-term thriving.
UK Pauses Largest Chip Deal in History
What’s New: The UK government is voicing national security concerns over a proposed sale of British microchip maker ARM to the US’s Nvidia, according to CNET.
Why This Matters: While not as famous as other big-name chipmakers, ARM’s processor designs are the beating heart of most of the world’s mobile phones and are gaining ground in AI and autonomous vehicle applications.
Japan’s SoftBank agreed last year to sell ARM to Nvidia as part of a $40 billion deal — the largest semiconductor deal in history.
Last week, UK Digital Secretary Oliver Dowden announced the country’s Competition and Markets Authority would review the sale, specifically to determine if there are any negative national security implications for Britain.
"Following careful consideration of the proposed takeover of ARM, I have today issued an intervention notice on national security grounds," said Dowden in a statement. "We want to support our thriving UK tech industry and welcome foreign investment, but it is appropriate that we properly consider the national security implications of a transaction like this."
SoftBank originally purchased ARM to improve its internet of things (IoT) capabilities and, if the deal closes, the Japanese tech giant will get an ownership stake in Nvidia.
Nvidia, on the other hand, wants ARM to boost its AI offerings.
Now the deal will go through a “phase one” investigation, where it will be more closely scrutinized. If no concerns are validated, the hold will be lifted; but if concerns persist, the agreement could be blocked in a “phase two” proceeding.
What I’m Thinking: Some readers may wonder why our cousins across the pond would have concerns about an American company acquiring a British company. But this is likely less about espionage fears and more about semiconductors as a strategic, national resource. It’s rational for UK leaders to not want this sale to decisively harm their domestic ability to design cutting-edge semiconductors. As we’ve discussed ad nauseam in this newsletter, chipsets are critical for a nation’s future economic and national security and I’m frankly encouraged to see London thinking this way. Going forward, we’re going to need allies like the UK to be thinking more carefully about securing and expanding their domestic technological base — and these kinds of decisions are what that improved thinking looks like.
The EU Moves to Regulate AI
What’s New: The European Union (EU) issued draft rules last week that would govern how governments and companies use AI.
Why This Matters: This is a first-of-its-kind policy that could influence how the US chooses to engage one of the most important, and complicated, technological advancements in human history.
The proposed regulations are outlined in a 108-page document that touches on everything from education to banking, law enforcement, and other “high risk” areas that impact “fundamental rights.”
For example, the rules ban the use of facial recognition technologies in public spaces (with some national security exemptions).
The proposal also requires AI companies to demonstrate that their products and services are safe and that they can explain the decisions made by their algorithms.
“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”
What I’m Thinking:
First, I’m generally skeptical of the EU’s tendency to regulate first and ask questions later.
Second, however, many of these changes aren’t as drastic as some expected them to be. In fact, regulators are getting at least a few “atta-boys” from industry, while some advocacy groups are saying the rules don’t go far enough.
Third, the societal challenges that are in view here are real. Things like ubiquitous facial recognition are going to have a huge impact on popular notions of privacy, autonomy, and “fairness.” Now, I don't for a second believe that the EU will solve any of these challenges with these regulations — they may even make them worse — but engaging these issues falls well within my understanding of good governance, and so I’ll be interested to see how this develops.
Fourth, the American approach to such things has always been fundamentally different from Europe’s. Historically, our national posture has prioritized individual freedom and liberty over any desire for government-provided “safety.” This has, in part, produced a dynamic, agile, and largely prosperous economic, social, and political atmosphere that is still the envy of the world. On occasion, however, we have decided that government regulation (such as the safety standards for commercial airliners) is a good idea, and it has been very successful. It’s arguable that technologies like AI could have a sufficiently significant impact across virtually all aspects of life that they too justify government regulation. These advancements are so new, though, that it is still too difficult to predict their implications, let alone take informed actions to mitigate their risks. This is why I prefer sector-specific, incremental rules that address particular harms rather than trying to solve every problem at once, as the EU is so fond of doing.
Biden Administration rolls out its 100-day plan to secure the grid.
Russia says it’ll build its own space station after leaving the ISS.
Wired magazine talks brains, hardware, and privacy with Facebook’s head of augmented reality.
That’s it for this Monday Brief. Thanks for reading, and if you think someone else would like this newsletter, please share it with your friends and followers. Have a great week!
For example, if you flew every day of your life, probability suggests you’d have to fly for 19,000 years before being killed in a fatal plane accident.
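That 19,000-year figure is easy to sanity-check. Here is a minimal back-of-envelope sketch in Python, assuming a per-flight fatal-accident risk of roughly 1 in 7 million; that rate is my assumption (broadly consistent with published airline-safety estimates), not a number the newsletter itself states.

```python
# Back-of-envelope check of the "19,000 years" figure.
# Assumed input: per-flight odds of dying in a crash of
# roughly 1 in 7 million (assumption, not from the newsletter).
per_flight_risk = 1 / 7_000_000
flights_per_year = 365  # one flight every day

# For a constant per-trial risk p, the expected number of
# trials before the event is 1 / p (geometric distribution).
expected_flights = 1 / per_flight_risk   # about 7 million flights
expected_years = expected_flights / flights_per_year

print(round(expected_years))  # prints 19178, i.e. roughly 19,000 years
```

A different assumed per-flight risk would shift the result proportionally, which is why such estimates vary from study to study.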