A massive DDoS attack leaves ‘Among Us’ unplayable in North America and Europe

Since late Friday afternoon, Among Us developer Innersloth has been trying to contain a DDoS attack against both its North American and European servers, leaving the popular game unplayable for many. “Service will be offline while the team works on fixing it, but might take a bit, hang tight! Sorry!” Innersloth said on Friday in a tweet spotted by Eurogamer.

As of this writing, Innersloth has managed to restore some servers, but the situation does not appear to be fully resolved, with the game’s official Twitter account still stating “Among Us servers down” in its profile. “Can’t believe I’m working on a Saturday right now, I was supposed to go and get a croissant,” Innersloth said in one particularly desperate-sounding update over the weekend.

Thanks to its popularity, Among Us is no stranger to disruptive hacking attacks. In 2020, the game experienced a far-reaching spam attack that affected as many as 5 million players after an individual named “Eris Loris” found a way to hack millions of games. The event led to no small amount of grief and frustration among the game’s community, with many taking to Reddit to vent their frustration at the hacker.

Hitting the Books: The Soviets once tasked an AI with our mutually assured destruction

Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no strangers to nuclear brinkmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie’s latest book, we can see how close humanity came to an atomic holocaust in 1983 and why an increasing reliance on automation — on both sides of the Iron Curtain — only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of “data, algorithms, and computing power”) are transforming how nations wage war both at home and abroad.

The New Fire cover (MIT Press)

Excerpted from The New Fire: War, Peace, and Democracy in the Age of AI by Andrew Imbrie and Ben Buchanan. Published by MIT Press. Copyright © 2021 by Andrew Imbrie and Ben Buchanan. All rights reserved.


THE DEAD HAND

As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began. At least, that was what the alarms said at the bunker in Moscow where Lieutenant Colonel Stanislav Petrov was on duty. 

Inside the bunker, sirens blared and a screen flashed the word “launch.” A missile was inbound. Petrov, unsure if it was an error, did not respond immediately. Then the system reported two more missiles, and then two more after that. The screen now said “missile strike.” The computer reported with its highest level of confidence that a nuclear attack was underway.

The technology had done its part, and everything was now in Petrov’s hands. To report such an attack meant the beginning of nuclear war, as the Soviet Union would surely launch its own missiles in retaliation. To not report such an attack was to impede the Soviet response, surrendering the precious few minutes the country’s leadership had to react before atomic mushroom clouds burst out across the country; “every second of procrastination took away valuable time,” Petrov later said. 

“For 15 seconds, we were in a state of shock,” he recounted. He felt like he was sitting on a hot frying pan. After quickly gathering as much information as he could from other stations, he estimated there was a 50-percent chance that an attack was under way. Soviet military protocol dictated that he base his decision off the computer readouts in front of him, the ones that said an attack was undeniable. After careful deliberation, Petrov called the duty officer to break the news: the early warning system was malfunctioning. There was no attack, he said. It was a roll of the atomic dice.

Twenty-three minutes after the alarms—the time it would have taken a missile to hit Moscow—he knew that he was right and the computers were wrong. “It was such a relief,” he said later. After-action reports revealed that the sun’s glare off a passing cloud had confused the satellite warning system. Thanks to Petrov’s decisions to disregard the machine and disobey protocol, humanity lived another day.

Petrov’s actions took extraordinary judgment and courage, and it was only by sheer luck that he was the one making the decisions that night. Most of his colleagues, Petrov believed, would have begun a war. He was the only one among the officers at that duty station who had a civilian, rather than military, education and who was prepared to show more independence. “My colleagues were all professional soldiers; they were taught to give and obey orders,” he said. The human in the loop — this particular human — had made all the difference.

Petrov’s story reveals three themes: the perceived need for speed in nuclear command and control to buy time for decision makers; the allure of automation as a means of achieving that speed; and the dangerous propensity of those automated systems to fail. These three themes have been at the core of managing the fear of a nuclear attack for decades and present new risks today as nuclear and non-nuclear command, control, and communications systems become entangled with one another. 

Perhaps nothing shows the perceived need for speed and the allure of automation as much as the fact that, within two years of Petrov’s actions, the Soviets deployed a new system to increase the role of machines in nuclear brinkmanship. It was properly known as Perimeter, but most people just called it the Dead Hand, a sign of the system’s diminished role for humans. As one former Soviet colonel and veteran of the Strategic Rocket Forces put it, “The Perimeter system is very, very nice. We remove unique responsibility from high politicians and the military.” The Soviets wanted the system to partly assuage their fears of nuclear attack by ensuring that, even if a surprise strike succeeded in decapitating the country’s leadership, the Dead Hand would make sure it did not go unpunished.

The idea was simple, if harrowing: in a crisis, the Dead Hand would monitor the environment for signs that a nuclear attack had taken place, such as seismic rumbles and radiation bursts. Programmed with a series of if-then commands, the system would run through the list of indicators, looking for evidence of the apocalypse. If signs pointed to yes, the system would test the communications channels with the Soviet General Staff. If those links were active, the system would remain dormant. If the system received no word from the General Staff, it would circumvent ordinary procedures for ordering an attack. The decision to launch would then rest in the hands of a lowly bunker officer, someone many ranks below a senior commander like Petrov, who would nonetheless find himself responsible for deciding if it was doomsday.
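To make that if-then sequence concrete, here is a minimal, purely illustrative Python sketch of the decision logic the excerpt describes; every name, input, and condition below is our own assumption for illustration, not a detail of the actual Perimeter system.

```python
# Hypothetical sketch of the "Dead Hand" decision sequence described above.
# All names and conditions are illustrative assumptions, not documented details.

def dead_hand_check(seismic_rumble: bool,
                    radiation_burst: bool,
                    general_staff_link_alive: bool) -> str:
    """Run through the attack indicators and decide what happens next."""
    attack_indicators = [seismic_rumble, radiation_burst]

    # If the environmental signs do not point to an attack, do nothing.
    if not all(attack_indicators):
        return "remain dormant"

    # Signs point to yes: test the communications channel with the General Staff.
    if general_staff_link_alive:
        # Leadership is still in contact, so the system stays out of the loop.
        return "remain dormant"

    # No word from the General Staff: circumvent ordinary launch procedures and
    # hand the decision to the duty officer in the bunker.
    return "delegate launch authority to the bunker duty officer"


print(dead_hand_check(seismic_rumble=True,
                      radiation_burst=True,
                      general_staff_link_alive=False))
```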

The United States was also drawn to automated systems. Since the 1950s, its government had maintained a network of computers to fuse incoming data streams from radar sites. This vast network, called the Semi-Automatic Ground Environment, or SAGE, was not as automated as the Dead Hand in launching retaliatory strikes, but its creation was rooted in a similar fear. Defense planners designed SAGE to gather radar information about a potential Soviet air attack and relay that information to the North American Aerospace Defense Command, which would intercept the invading planes. The cost of SAGE was more than double that of the Manhattan Project, or almost $100 billion in 2022 dollars. Each of the twenty SAGE facilities boasted two 250-ton computers that each occupied 7,500 square feet and were among the most advanced machines of the era.

If nuclear war is like a game of chicken — two nations daring each other to turn away, like two drivers barreling toward a head-on collision — automation offers the prospect of a dangerous but effective strategy. As the nuclear theorist Herman Kahn described:

The “skillful” player may get into the car quite drunk, throwing whisky bottles out the window to make it clear to everybody just how drunk he is. He wears very dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem; likewise, if both players try this strategy. 

To automate nuclear reprisal is to play chicken without brakes or a steering wheel. It tells the world that no nuclear attack will go unpunished, but it greatly increases the risk of catastrophic accidents.

Automation helped enable the dangerous but seemingly predictable world of mutually assured destruction. Neither the United States nor the Soviet Union was able to launch a disarming first strike against the other; it would have been impossible for one side to fire its nuclear weapons without alerting the other side and providing at least some time to react. Even if a surprise strike were possible, it would have been impractical to amass a large enough arsenal of nuclear weapons to fully disarm the adversary by firing multiple warheads at each enemy silo, submarine, and bomber capable of launching a counterattack. Hardest of all was knowing where to fire. Submarines in the ocean, mobile ground-launched systems on land, and round-the-clock combat air patrols in the skies made the prospect of successfully executing such a first strike deeply unrealistic. Automated command and control helped ensure these units would receive orders to strike back. Retaliation was inevitable, and that made tenuous stability possible. 

Modern technology threatens to upend mutually assured destruction. When an advanced missile called a hypersonic glide vehicle nears space, for example, it separates from its booster rockets and accelerates down toward its target at five times the speed of sound. Unlike a traditional ballistic missile, the vehicle can radically alter its flight profile over long ranges, evading missile defenses. In addition, its low-altitude approach renders ground-based sensors ineffective, further compressing the amount of time for decision-making. Some military planners want to use machine learning to further improve the navigation and survivability of these missiles, rendering any future defense against them even more precarious.

Other kinds of AI might upend nuclear stability by making more plausible a first strike that thwarts retaliation. Military planners fear that machine learning and related data collection technologies could find their hidden nuclear forces more easily. For example, better machine learning–driven analysis of overhead imagery could spot mobile missile units; the United States reportedly has developed a highly classified program to use AI to track North Korean launchers. Similarly, autonomous drones under the sea might detect enemy nuclear submarines, enabling them to be neutralized before they can retaliate for an attack. More advanced cyber operations might tamper with nuclear command and control systems or fool early warning mechanisms, causing confusion in the enemy’s networks and further inhibiting a response. Such fears of what AI can do make nuclear strategy harder and riskier. 

For some, just like the Cold War strategists who deployed the expert systems in SAGE and the Dead Hand, the answer to these new fears is more automation. The commander of Russia’s Strategic Rocket Forces has said that the original Dead Hand has been improved upon and is still functioning, though he didn’t offer technical details. In the United States, some proposals call for the development of a new Dead Hand–esque system to ensure that any first strike is met with nuclear reprisal, with the goal of deterring such a strike. It is a prospect that has strategic appeal to some warriors but raises grave concern for Cassandras, who warn of the present frailties of machine learning decision-making, and for evangelists, who do not want AI mixed up in nuclear brinkmanship.

While the evangelists’ concerns are more abstract, the Cassandras have concrete reasons for worry. Their doubts are grounded in stories like Petrov’s, in which systems were imbued with far too much trust and only a human who chose to disobey orders saved the day. The technical failures described in chapter 4 also feed their doubts. The operational risks of deploying fallible machine learning into complex environments like nuclear strategy are vast, and the successes of machine learning in other contexts do not always apply. Just because neural networks excel at playing Go or generating seemingly authentic videos or even determining how proteins fold does not mean that they are any more suited than Petrov’s Cold War–era computer for reliably detecting nuclear strikes. In the realm of nuclear strategy, misplaced trust in machines might be deadly for civilization; it is an obvious example of how the new fire’s force could quickly burn out of control.

Of particular concern is the challenge of balancing between false negatives and false positives—between failing to alert when an attack is under way and falsely sounding the alarm when it is not. The two kinds of failure are in tension with each other. Some analysts contend that American military planners, operating from a place of relative security, worry more about the latter. In contrast, they argue that Chinese planners are more concerned about the limits of their early warning systems, given that China possesses a nuclear arsenal that lacks the speed, quantity, and precision of American weapons. As a result, Chinese government leaders worry chiefly about being too slow to detect an attack in progress. If these leaders decided to deploy AI to avoid false negatives, they might increase the risk of false positives, with devastating nuclear consequences.
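A toy simulation makes that tension concrete: with a fixed, imperfect detector, lowering the alarm threshold to miss fewer real attacks (false negatives) necessarily raises the rate of false alarms (false positives). Everything in this sketch — the sensor model, the noise level, the thresholds — is invented for illustration and does not describe any real early warning system.

```python
# Illustrative sketch of the false-negative vs. false-positive tradeoff.
# All numbers are invented; the point is only the direction of the tradeoff.
import random

random.seed(0)

def detector_score(attack_underway: bool) -> float:
    """Noisy sensor reading: higher on average during an attack, but the cases overlap."""
    base = 0.7 if attack_underway else 0.3
    return base + random.gauss(0, 0.15)

def error_rates(threshold: float, trials: int = 10_000) -> tuple[float, float]:
    """Estimate false-negative and false-positive rates for a given alarm threshold."""
    false_neg = sum(detector_score(True) < threshold for _ in range(trials)) / trials
    false_pos = sum(detector_score(False) >= threshold for _ in range(trials)) / trials
    return false_neg, false_pos

for threshold in (0.65, 0.50, 0.35):
    fn, fp = error_rates(threshold)
    print(f"threshold={threshold:.2f}  missed attacks={fn:.1%}  false alarms={fp:.1%}")
```

Lowering the threshold drives the missed-attack rate toward zero while the false-alarm rate climbs, which is exactly the dilemma the excerpt attributes to planners who fear being too slow to detect a strike.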

The strategic risks brought on by AI’s new role in nuclear strategy are even more worrying. The multifaceted nature of AI blurs lines between conventional deterrence and nuclear deterrence and warps the established consensus for maintaining stability. For example, the machine learning–enabled battle networks that warriors hope might manage conventional warfare might also manage nuclear command and control. In such a situation, a nation may attack another nation’s information systems with the hope of degrading its conventional capacity and inadvertently weaken its nuclear deterrent, causing unintended instability and fear and creating incentives for the victim to retaliate with nuclear weapons. This entanglement of conventional and nuclear command-and-control systems, as well as the sensor networks that feed them, increases the risks of escalation. AI-enabled systems may likewise falsely interpret an attack on command-and-control infrastructure as a prelude to a nuclear strike. Indeed, there is already evidence that autonomous systems perceive escalation dynamics differently from human operators.

Another concern, almost philosophical in its nature, is that nuclear war could become even more abstract than it already is, and hence more palatable. The concern is best illustrated by an idea from Roger Fisher, a World War II pilot turned arms control advocate and negotiations expert. During the Cold War, Fisher proposed that nuclear codes be stored in a capsule surgically embedded near the heart of a military officer who would always be near the president. The officer would also carry a large butcher knife. To launch a nuclear war, the president would have to use the knife to personally kill the officer and retrieve the capsule—a comparatively small but symbolic act of violence that would make the tens of millions of deaths to come more visceral and real. 

Fisher’s Pentagon friends objected to his proposal, with one saying, “My God, that’s terrible. Having to kill someone would distort the president’s judgment. He might never push the button.” This revulsion, of course, was what Fisher wanted: that, in the moment of greatest urgency and fear, humanity would have one more chance to experience—at an emotional, even irrational, level—what was about to happen, and one more chance to turn back from the brink.

Just as Petrov’s independence prompted him to choose a different course, Fisher’s proposed symbolic killing of an innocent was meant to force one final reconsideration. Automating nuclear command and control would do the opposite, reducing everything to error-prone, stone-cold machine calculation. If the capsule with nuclear codes were embedded near the officer’s heart, if the neural network decided the moment was right, and if it could do so, it would—without hesitation and without understanding—plunge in the knife.

Uber secures 30-month London taxi license

Following a years-long dispute with the city’s transit regulator, Uber has earned a 30-month license to continue operating in London, Transport for London (TfL) said on Saturday. “Uber has been granted a London private hire vehicle operator’s license for a period of two and a half years,” a TfL spokesperson told CNBC.

Uber’s dispute with TfL dates back to 2017 when the agency said the company wasn’t “fit and proper” to operate in the city and went on to revoke its taxi license. Among other issues, TfL said Uber had failed to properly conduct driver background checks and report serious criminal offenses. Uber appealed that decision. And while a court went on to grant it 15 months to clean up its act, TfL eventually revoked the company’s license again in 2019, noting at the time it had shown a “pattern of failures” in the past. Subsequently, Uber won another court decision in 2020 that gave it a new 18-month license that came with conditions designed to monitor its adherence to local regulations.

On Twitter, Uber said it was “delighted” by TfL’s decision, noting the agency “rightly holds our industry to the highest regulatory and safety standards,” and that it was “pleased to have met their high bar.” But not everyone is happy about the decision.

“This is yet another tragically missed opportunity for [London Mayor] Sadiq Khan to make worker rights a condition of license for Uber to finally bring an end to the abuse of 100,000 gig workers licensed by Transport for London,” the App Drivers and Couriers Union said following the announcement. The group accused the company of failing to comply with a UK Supreme Court ruling from last year that said the company should treat its drivers as workers.

Apple’s latest AirPods are on sale for $150 right now

Apple’s third-generation AirPods may only be a few months old, but you can purchase them right now for 16 percent off their suggested retail price. Amazon has discounted the company’s latest earbuds to $149.98. That’s only $10 more than their all-time low of $140.

Buy AirPods (3rd gen) at Amazon – $150

While you could buy Apple’s second-generation AirPods for less money, we think the new model is a better purchase for most people. We gave Apple’s latest earbuds a score of 88, noting they were “better in nearly every way” than their predecessor. They feature a new design that we found a lot more comfortable. Sound quality is likewise improved, with the third-generation AirPods capable of delivering rich bass. Battery life was another highlight, with the included charging case providing up to 30 hours of listening time. Apple’s H1 chip enables a handful of handy features, including hands-free Siri, support for spatial audio with head tracking and seamless pairing with Apple devices.

Of course, they’re not perfect. Their one-size-fits-all design won’t be for everyone, and they don’t come with active noise cancellation, a feature that would make them ideal for commuting. Still, if you own an iPhone, it’s hard to go wrong with the third-generation AirPods.

Follow @EngadgetDeals on Twitter for the latest tech deals and buying advice.

Recommended Reading: Telegram is playing with fire

Telegram’s dangerous game

Casey Newton, Platformer

Telegram was almost banned in Brazil because it missed some emails from the local authorities. In his newsletter, Newton explains why this is the latest in a series of troubling decisions from a platform with over 500 million users. “When you’re providing critical communications infrastructure to tens of millions of people, though, you have more responsibility,” he writes.

Here’s how an algorithm guides a medical decision

Nicole Wetsman, The Verge

Artificial intelligence is being used for all sorts of things in medicine, one of which is predicting if a patient is at risk for conditions like diabetes or cardiovascular issues. However, it can be difficult for us as members of the public to understand how these algorithms work. The Verge guides us through one called Sepsis Watch, a system that monitors patients for a potentially deadly condition following an infection.

Ukraine’s engineers battle to keep the internet running while Russian bombs fall around them

Thomas Brewster, Forbes

Elon Musk’s Starlink satellites are helping to provide internet access in Ukraine amid the ongoing Russian invasion, but crews on the ground are venturing into dangerous areas to repair existing equipment that has been damaged by bombings.