Friday Brief for 5 November 2021
NSO Gets Booted; AI Countermeasures; and Gamifying War
Meta Data — Data providing information about one or more aspects of other data; it is used to summarize basic information about data, which can make tracking and working with specific data easier.
Time to Go, NSO
What’s New: The Department of Commerce has added four foreign companies, including the Israeli spyware company NSO Group, to the Entity List, according to a government press release.
Why This Matters: Commerce says these actions are part of a broader effort to make human rights more central to U.S. foreign policy.
On Wednesday, Commerce’s Bureau of Industry and Security (BIS) said NSO Group and Candiru (Israel) were added to the list for supplying foreign governments with spyware that is used to target “government officials, journalists, businesspeople, activists, academics, and embassy workers.”
Positive Technologies (Russia) and Computer Security Initiative Consultancy (Singapore) were also listed for trafficking in “tools used to gain unauthorized access to information systems, threatening the privacy and security of individuals and organizations worldwide.”
All of these companies are known for nefarious activities, but NSO Group made waves this summer when it was discovered that its employees — including several former employees of the U.S. National Security Agency (NSA) — helped governments in Saudi Arabia, the U.A.E. (Dubai), Mexico, and elsewhere spy on and track private citizens and even journalists. (Read my previous commentary here.)
“The Entity List is a tool utilized by BIS to restrict the export, reexport, and in-country transfer of items subject to the EAR to persons (individuals, organizations, companies) reasonably believed to be involved, have been involved, or pose a significant risk of being or becoming involved, in activities contrary to the national security or foreign policy interests of the United States,” according to BIS. “For the four entities added to the Entity List in this final rule, BIS imposes a license requirement that applies to all items subject to the EAR (Export Administration Regulation). In addition, no license exceptions are available for exports, reexports, or transfers (in-country) to the entities being added to the Entity List in this rule. BIS imposes a license review policy of a presumption of denial for these entities.”
What I’m Thinking:
NSO should have followed its principles, not the money. NSO founder Shalev Hulio once said, “We built this company to save life. Period. I think there is not enough education about what a national security or intelligence organization needs to do every day in order to give, you know, basic security to their citizens.” Certainly this is true; but when you offer services like these, you are duty-bound to ensure they are not abused. That’s a tall order and a difficult task, but it is also the only way to navigate this “gray zone,” where data and money flow to anyone willing to step over a few ethical boundaries.
This will cause some tension between Washington and Jerusalem. NSO is the pride and joy of Israel, and exports of the company’s capabilities are regulated by the nation’s Ministry of Defense. This being the case, the BIS ruling implies either that Israel’s government was insufficiently monitoring NSO or that it was complicit in the company’s violations. In either case, feelings are likely to be hurt. (Note: Israel’s relations with several Gulf nations, including the U.A.E. and Saudi Arabia, have significantly improved over the last several years, and this is almost certainly the context for NSO’s business with these nations.)
“The List” is becoming a tool of choice. In 2019, the Trump administration issued an Executive Order on Securing the Information and Communications Technology and Services Supply Chain. Among other actions, this order expanded the Secretary of Commerce’s authority to review tech-related contracts and activities, and to stop them if they are deemed a threat to national security. This paved the way for companies like China’s Huawei to be “listed” and excluded from the U.S. market, losing billions of dollars in revenue. The Biden administration has followed this model and, in July, added 34 entities to the list at once, including 14 based in China. “Listing” bad-guy companies is a type of “whack-a-mole” strategy: you take known threats off the board one at a time. While it is not a comprehensive strategy for reducing risk, it does give the government an agile tool against the most pressing threats, and it is likely to feature prominently in American national security policy for the foreseeable future.
AI Denial & Deception
What’s New: I’ve recently come across two pieces of AI research that have sent my mind spinning on what I’m calling “AI denial and deception.”
Why This Matters: AI will introduce novel benefits and challenges as it is integrated into warcraft.
Some of the most significant advancements in AI in the last two decades have been in the sub-discipline of computer vision (CV).
CV is generally broken into six parts: image capture and processing, object detection and image segmentation, object recognition, object tracking, gesture and movement recognition, and scene understanding.
In civilian life, this tech is used for things like allowing us to unlock our phones, enabling automated vehicles to recognize traffic signs, and automatically identifying photos of your friends and family.
In national security, this tech is increasingly used to sift through huge stores of images to identify people, weapons, and other targets of interest at a scale and speed that human analysts could never achieve. This, then, brings me to the two research findings of interest.
First, back in 2016, smart people at Carnegie Mellon University printed special patterns onto a set of eyeglass frames that fooled a facial-recognition system into believing a white male was the actress Milla Jovovich and that a South Asian woman was a Middle Eastern male.
The second experiment, conducted in 2018 by researchers at China’s Tencent, used three small stickers on the roadway to confuse Tesla’s autonomous driving system, causing the car to swerve into the oncoming-traffic lane.
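Both experiments exploit the same underlying trick: an “adversarial example,” a small, deliberate perturbation that pushes an input across a model’s decision boundary. Here is a minimal sketch of the idea against a toy linear classifier; the weights and input are random illustrations, not any real face- or lane-recognition system.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # the toy model's learned weights
x = rng.normal(size=16)   # an input the model currently classifies

def predict(v):
    """Classify by the sign of the linear score."""
    return 1 if w @ v > 0 else 0

label = predict(x)

# Fast-gradient-style step: nudge every feature slightly in the direction
# that moves the score toward the opposite class. Epsilon is chosen just
# large enough to cross the boundary, to keep the demo deterministic.
epsilon = 1.1 * abs(w @ x) / np.abs(w).sum()
direction = np.sign(w) * (-1 if label == 1 else 1)
x_adv = x + epsilon * direction

print(label, predict(x_adv))  # the perturbed input lands in the other class
```

The per-feature change is tiny, yet the classification flips — the same logic, scaled up to deep networks, is what the printed glasses patterns and road stickers accomplished.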
What I’m Thinking:
The cat and mouse game never ends. AI-enabled computer vision is a critical capability that, in some applications, far outstrips human capabilities. But, as our understanding of these capabilities advances, so does our ability to fool and manipulate them. Therefore, as CV is integrated into national security missions, it will simultaneously dramatically improve and degrade our “awareness.” Both of these experiments are several years old and we’ve no doubt come a long way in addressing these vulnerabilities. But, then again, so have the bad guys.
AI camouflage could be powerful. Here are just two examples of how this tech could be applied in the real world. First, imagine being able to paint military hardware so that it could not be “seen” by satellites or other overhead systems. What if, for example, a tank could be made to look like an ambulance? This would end up being a type of poor man’s stealth that would be available to virtually any military. Second, what if we could replicate the Tesla experiment, but with autonomous systems like drones? Could we use specially crafted images or other data that would, for example, cause a drone to crash itself? If so, we (or our enemies) could put these images on the tops of critical facilities as a kind of “air defense system” that would be far cheaper and easier to deploy than conventional defenses.
This is next-generation “denial and deception.” Deception has always been a part of war. During the 1980s, however, the U.S. formalized a theoretical framework for thinking about and assessing how militaries use secrecy and deception — this became known as “denial and deception” (D&D). The capabilities discussed here generally fall into this category. Other actions like data manipulation and “poisoning” (i.e., deliberately integrating false information or corrupted data into an enemy’s decision-making process) will also feature prominently in this next generation of military D&D.
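To make “poisoning” concrete, here is a toy sketch: an attacker slips mislabeled points into a training set, dragging a simple nearest-centroid classifier’s decision boundary out of place. All data is synthetic and illustrative — real poisoning attacks target far more complex models, but the mechanism is the same.

```python
import numpy as np

# Clean 1-D training data: class 0 on the left, class 1 on the right.
x0 = np.linspace(-2.0, -0.1, 50)
x1 = np.linspace(0.1, 2.0, 50)
X_clean = np.concatenate([x0, x1])
y_clean = np.array([0] * 50 + [1] * 50)

def fit(X, y):
    """Return the per-class centroids (the 'model')."""
    return np.array([X[y == 0].mean(), X[y == 1].mean()])

def accuracy(centroids, X, y):
    """Classify each point by its nearest centroid; score against true labels."""
    pred = np.abs(X[:, None] - centroids[None, :]).argmin(axis=1)
    return float((pred == y).mean())

acc_clean = accuracy(fit(X_clean, y_clean), X_clean, y_clean)

# Poisoning: inject 40 class-0-looking points falsely labeled as class 1.
# The class-1 centroid gets dragged left, and the boundary shifts with it.
X_poison = np.concatenate([X_clean, np.full(40, -0.9)])
y_poison = np.concatenate([y_clean, np.ones(40, dtype=int)])
acc_poisoned = accuracy(fit(X_poison, y_poison), X_clean, y_clean)

print(acc_clean, acc_poisoned)  # the poisoned model misclassifies part of class 0
```

The victim never sees obviously wrong behavior during training; the model simply learns a corrupted picture of the world — which is exactly what makes poisoning attractive as a D&D technique.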
Gamifying War
What’s New: Israel’s Elbit Systems is using AI to help soldiers feel like they’re in a first-person shooter video game.
Why This Matters: The Assault Rifle Combat Applications System (ARCAS) reportedly improves a soldier’s ability to conduct intelligence, surveillance, and reconnaissance, to communicate with their squad, and to automatically know and adjust to a target’s range.
What I’m Thinking:
The future inevitably looks like this. The capabilities illustrated by ARCAS are increasingly real and offer meaningful battlefield advantage. Things like automatic aperture and ballistic adjustments to target range and atmospheric data will be especially helpful and are more likely to be deployed in the near- to mid-term.
But, new tech brings new vulnerabilities. Think back to our second story: what if enemy soldiers had uniforms that tricked the ARCAS AI into identifying them as “friendlies” or civilians? Bottom line: future war may get easier, but it will never be easy.
Also, more features = more problems, so deployment will be iterative. There’s a reason why our military still uses paper maps alongside all of our whiz-bang tech: paper maps still work after taking a bullet. Furthermore, tech like ARCAS is just one more piece of equipment that needs a battery (yet another thing soldiers have to carry) and a data link (often hard to find in austere fighting environments). Ask most riflemen if they’d like to carry another 15-20 pounds so that their rifle will count rounds for them, and they’ll likely take a pass. Even so, the comprehensive advantage envisioned is worth pursuing, but it will be realized in slow, deliberate phases.
Let’s Get Visual
“Sideloading is a cyber criminal’s best friend,” according to Apple’s software chief
That’s it for this Friday Brief. Thanks for reading, and if you think someone else would like this newsletter, please share it with your friends and followers. Have a great weekend!