Take on dirt courses in the great outdoors! Acrobatic riding in 'Airborne MX': Smartphone Game Discoveries

From among the countless smartphone games out there, we keep introducing titles that are generating buzz, that caught our eye, or that simply look interesting. This time, we're featuring 'Airborne MX'.

On March 28, 2016, the "ZTE Blade V580", an entry-level smartphone with a fingerprint sensor, went on sale: What Day Is It Today?

The "ZTE Blade V580" is a smartphone that packs a 5.5-inch full HD LCD despite its low price of 27,800 yen (excluding tax).

This entry-level model went on sale on this day in 2016.

Apple and Google close loophole that allowed Russians to use Mir cards for mobile payments

Apple has closed a loophole that had allowed some Russians to continue using its mobile payments service despite the ongoing economic sanctions against Russia. According to Reuters, the company told the country’s largest lender on Thursday it would no longer support Russia’s homegrown Mir payments system through Apple Pay.

“Apple has informed NSPK it is suspending support for Mir cards in the Apple Pay payment service,” the National Card Payment System said Friday. “Starting from March 24th, users cannot add new Mir cards to the service. Apple will stop all operations of previously added cards over the next few days.”

Google took similar action last week as well. According to a separate report from The Wall Street Journal, the company paused a pilot that had allowed Russians to connect their Mir cards to Google Pay. “Google Pay is pausing payments-related services in Russia as a result of payment services disruption out of our control,” a Google spokesperson told the outlet.

As The Verge notes, the Central Bank of Russia established Mir after the US and other countries imposed sanctions on Russia in response to its annexation of Crimea in 2014. According to statistics shared by the Central Bank, Mir cards are involved in more than 25 percent of all card transactions within the country. Previously, cards from major Russian financial institutions like VTB Group and Sovcombank stopped working with Apple Pay and Google Pay shortly after the Kremlin launched its invasion of Ukraine on February 24th.

A massive DDoS attack leaves ‘Among Us’ unplayable in North America and Europe

Since late Friday afternoon, Among Us developer Innersloth has been trying to contain a DDoS attack against both its North American and European servers, leaving the popular game unplayable for many. “Service will be offline while the team works on fixing it, but might take a bit, hang tight! Sorry!” Innersloth said on Friday in a tweet spotted by Eurogamer.

As of this writing, Innersloth has managed to restore some servers, but the situation does not appear to be fully resolved, with the game’s official Twitter account still stating “Among Us servers down” in its profile. “Can’t believe I’m working on a Saturday right now, I was supposed to go and get a croissant,” Innersloth said in one particularly desperate-sounding update over the weekend.

Thanks to its popularity, Among Us is no stranger to disruptive hacking attacks. In 2020, the game experienced a far-reaching spam attack that affected as many as 5 million players after an individual named “Eris Loris” found a way to hack millions of games. The event led to no small amount of grief and frustration among the game’s community, with many taking to Reddit to vent their frustration at the hacker.

Hitting the Books: The Soviets once tasked an AI with our mutually assured destruction

Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no stranger to nuclear brinksmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie’s latest book, we can see how close humanity came to an atomic holocaust in 1983 and why an increasing reliance on automation — on both sides of the Iron Curtain — only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of “data, algorithms, and computing power”) are transforming how nations wage war both domestically and abroad.


Excerpted from The New Fire: War, Peace, and Democracy in the Age of AI by Andrew Imbrie and Ben Buchanan. Published by MIT Press. Copyright © 2021 by Andrew Imbrie and Ben Buchanan. All rights reserved.


THE DEAD HAND

As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began. At least, that was what the alarms said at the bunker in Moscow where Lieutenant Colonel Stanislav Petrov was on duty. 

Inside the bunker, sirens blared and a screen flashed the word “launch.” A missile was inbound. Petrov, unsure if it was an error, did not respond immediately. Then the system reported two more missiles, and then two more after that. The screen now said “missile strike.” The computer reported with its highest level of confidence that a nuclear attack was underway.

The technology had done its part, and everything was now in Petrov’s hands. To report such an attack meant the beginning of nuclear war, as the Soviet Union would surely launch its own missiles in retaliation. To not report such an attack was to impede the Soviet response, surrendering the precious few minutes the country’s leadership had to react before atomic mushroom clouds burst out across the country; “every second of procrastination took away valuable time,” Petrov later said. 

“For 15 seconds, we were in a state of shock,” he recounted. He felt like he was sitting on a hot frying pan. After quickly gathering as much information as he could from other stations, he estimated there was a 50-percent chance that an attack was under way. Soviet military protocol dictated that he base his decision off the computer readouts in front of him, the ones that said an attack was undeniable. After careful deliberation, Petrov called the duty officer to break the news: the early warning system was malfunctioning. There was no attack, he said. It was a roll of the atomic dice.

Twenty-three minutes after the alarms—the time it would have taken a missile to hit Moscow—he knew that he was right and the computers were wrong. “It was such a relief,” he said later. After-action reports revealed that the sun’s glare off a passing cloud had confused the satellite warning system. Thanks to Petrov’s decisions to disregard the machine and disobey protocol, humanity lived another day.

Petrov’s actions took extraordinary judgment and courage, and it was only by sheer luck that he was the one making the decisions that night. Most of his colleagues, Petrov believed, would have begun a war. He was the only one among the officers at that duty station who had a civilian, rather than military, education and who was prepared to show more independence. “My colleagues were all professional soldiers; they were taught to give and obey orders,” he said. The human in the loop — this particular human — had made all the difference.

Petrov’s story reveals three themes: the perceived need for speed in nuclear command and control to buy time for decision makers; the allure of automation as a means of achieving that speed; and the dangerous propensity of those automated systems to fail. These three themes have been at the core of managing the fear of a nuclear attack for decades and present new risks today as nuclear and non-nuclear command, control, and communications systems become entangled with one another. 

Perhaps nothing shows the perceived need for speed and the allure of automation as much as the fact that, within two years of Petrov’s actions, the Soviets deployed a new system to increase the role of machines in nuclear brinkmanship. It was properly known as Perimeter, but most people just called it the Dead Hand, a sign of the system’s diminished role for humans. As one former Soviet colonel and veteran of the Strategic Rocket Forces put it, “The Perimeter system is very, very nice. We remove unique responsibility from high politicians and the military.” The Soviets wanted the system to partly assuage their fears of nuclear attack by ensuring that, even if a surprise strike succeeded in decapitating the country’s leadership, the Dead Hand would make sure it did not go unpunished.

The idea was simple, if harrowing: in a crisis, the Dead Hand would monitor the environment for signs that a nuclear attack had taken place, such as seismic rumbles and radiation bursts. Programmed with a series of if-then commands, the system would run through the list of indicators, looking for evidence of the apocalypse. If signs pointed to yes, the system would test the communications channels with the Soviet General Staff. If those links were active, the system would remain dormant. If the system received no word from the General Staff, it would circumvent ordinary procedures for ordering an attack. The decision to launch would then rest in the hands of a lowly bunker officer, someone many ranks below a senior commander like Petrov, who would nonetheless find himself responsible for deciding if it was doomsday.
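The if-then chain described above can be sketched in a few lines of code. This is purely illustrative: the sensor names, the two-indicator rule, and the return strings are all invented for this sketch, since the real Perimeter system's logic has never been made public.

```python
# Hypothetical sketch of the Dead Hand's if-then chain as described in the
# excerpt. All sensor names, conditions, and outcomes are assumptions made
# for illustration; the actual Perimeter design is not publicly documented.

from dataclasses import dataclass


@dataclass
class SensorReadings:
    seismic_rumble: bool      # ground shocks consistent with detonations
    radiation_burst: bool     # sudden spike in ambient radiation
    general_staff_link: bool  # command channel to the General Staff responds


def dead_hand_decision(readings: SensorReadings) -> str:
    """Run the list of indicators, then test the command link, and only
    as a last resort delegate launch authority downward."""
    attack_indicated = readings.seismic_rumble and readings.radiation_burst
    if not attack_indicated:
        return "stay dormant"  # no evidence of an attack
    if readings.general_staff_link:
        return "stay dormant"  # leadership alive; normal chain of command holds
    # Attack detected and no word from the General Staff: bypass ordinary
    # procedures and hand the decision to the bunker duty officer.
    return "delegate launch authority to duty officer"
```

Note how the structure concentrates all the danger in the final branch: every earlier check defaults to doing nothing, and only the combination of attack indicators plus a dead command link pushes the decision down to a single junior officer.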

The United States was also drawn to automated systems. Since the 1950s, its government had maintained a network of computers to fuse incoming data streams from radar sites. This vast network, called the Semi-Automatic Ground Environment, or SAGE, was not as automated as the Dead Hand in launching retaliatory strikes, but its creation was rooted in a similar fear. Defense planners designed SAGE to gather radar information about a potential Soviet air attack and relay that information to the North American Aerospace Defense Command, which would intercept the invading planes. The cost of SAGE was more than double that of the Manhattan Project, or almost $100 billion in 2022 dollars. Each of the twenty SAGE facilities boasted two 250-ton computers, each occupying 7,500 square feet, among the most advanced machines of the era.

If nuclear war is like a game of chicken — two nations daring each other to turn away, like two drivers barreling toward a head-on collision — automation offers the prospect of a dangerous but effective strategy. As the nuclear theorist Herman Kahn described:

The “skillful” player may get into the car quite drunk, throwing whisky bottles out the window to make it clear to everybody just how drunk he is. He wears very dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem; likewise, if both players try this strategy. 

To automate nuclear reprisal is to play chicken without brakes or a steering wheel. It tells the world that no nuclear attack will go unpunished, but it greatly increases the risk of catastrophic accidents.

Automation helped enable the dangerous but seemingly predictable world of mutually assured destruction. Neither the United States nor the Soviet Union was able to launch a disarming first strike against the other; it would have been impossible for one side to fire its nuclear weapons without alerting the other side and providing at least some time to react. Even if a surprise strike were possible, it would have been impractical to amass a large enough arsenal of nuclear weapons to fully disarm the adversary by firing multiple warheads at each enemy silo, submarine, and bomber capable of launching a counterattack. Hardest of all was knowing where to fire. Submarines in the ocean, mobile ground-launched systems on land, and round-the-clock combat air patrols in the skies made the prospect of successfully executing such a first strike deeply unrealistic. Automated command and control helped ensure these units would receive orders to strike back. Retaliation was inevitable, and that made tenuous stability possible. 

Modern technology threatens to upend mutually assured destruction. When an advanced missile called a hypersonic glide vehicle nears space, for example, it separates from its booster rockets and accelerates down toward its target at five times the speed of sound. Unlike a traditional ballistic missile, the vehicle can radically alter its flight profile over long ranges, evading missile defenses. In addition, its low-altitude approach renders ground-based sensors ineffective, further compressing the amount of time for decision-making. Some military planners want to use machine learning to further improve the navigation and survivability of these missiles, rendering any future defense against them even more precarious.

Other kinds of AI might upend nuclear stability by making more plausible a first strike that thwarts retaliation. Military planners fear that machine learning and related data collection technologies could find their hidden nuclear forces more easily. For example, better machine learning–driven analysis of overhead imagery could spot mobile missile units; the United States reportedly has developed a highly classified program to use AI to track North Korean launchers. Similarly, autonomous drones under the sea might detect enemy nuclear submarines, enabling them to be neutralized before they can retaliate for an attack. More advanced cyber operations might tamper with nuclear command and control systems or fool early warning mechanisms, causing confusion in the enemy’s networks and further inhibiting a response. Such fears of what AI can do make nuclear strategy harder and riskier. 

For some, just like the Cold War strategists who deployed the expert systems in SAGE and the Dead Hand, the answer to these new fears is more automation. The commander of Russia’s Strategic Rocket Forces has said that the original Dead Hand has been improved upon and is still functioning, though he didn’t offer technical details. In the United States, some proposals call for the development of a new Dead Hand–esque system to ensure that any first strike is met with nuclear reprisal, with the goal of deterring such a strike. It is a prospect that has strategic appeal to some warriors but raises grave concern for Cassandras, who warn of the present frailties of machine learning decision-making, and for evangelists, who do not want AI mixed up in nuclear brinkmanship.

While the evangelists’ concerns are more abstract, the Cassandras have concrete reasons for worry. Their doubts are grounded in stories like Petrov’s, in which systems were imbued with far too much trust and only a human who chose to disobey orders saved the day. The technical failures described in chapter 4 also feed their doubts. The operational risks of deploying fallible machine learning into complex environments like nuclear strategy are vast, and the successes of machine learning in other contexts do not always apply. Just because neural networks excel at playing Go or generating seemingly authentic videos or even determining how proteins fold does not mean that they are any more suited than Petrov’s Cold War–era computer for reliably detecting nuclear strikes. In the realm of nuclear strategy, misplaced trust of machines might be deadly for civilization; it is an obvious example of how the new fire’s force could quickly burn out of control.

Of particular concern is the challenge of balancing between false negatives and false positives—between failing to alert when an attack is under way and falsely sounding the alarm when it is not. The two kinds of failure are in tension with each other. Some analysts contend that American military planners, operating from a place of relative security, worry more about the latter. In contrast, they argue that Chinese planners are more concerned about the limits of their early warning systems, given that China possesses a nuclear arsenal that lacks the speed, quantity, and precision of American weapons. As a result, Chinese government leaders worry chiefly about being too slow to detect an attack in progress. If these leaders decided to deploy AI to avoid false negatives, they might increase the risk of false positives, with devastating nuclear consequences.

The strategic risks brought on by AI’s new role in nuclear strategy are even more worrying. The multifaceted nature of AI blurs lines between conventional deterrence and nuclear deterrence and warps the established consensus for maintaining stability. For example, the machine learning–enabled battle networks that warriors hope might manage conventional warfare might also manage nuclear command and control. In such a situation, a nation may attack another nation’s information systems with the hope of degrading its conventional capacity and inadvertently weaken its nuclear deterrent, causing unintended instability and fear and creating incentives for the victim to retaliate with nuclear weapons. This entanglement of conventional and nuclear command-and-control systems, as well as the sensor networks that feed them, increases the risks of escalation. AI-enabled systems may likewise falsely interpret an attack on command-and-control infrastructure as a prelude to a nuclear strike. Indeed, there is already evidence that autonomous systems perceive escalation dynamics differently from human operators.

Another concern, almost philosophical in its nature, is that nuclear war could become even more abstract than it already is, and hence more palatable. The concern is best illustrated by an idea from Roger Fisher, a World War II pilot turned arms control advocate and negotiations expert. During the Cold War, Fisher proposed that nuclear codes be stored in a capsule surgically embedded near the heart of a military officer who would always be near the president. The officer would also carry a large butcher knife. To launch a nuclear war, the president would have to use the knife to personally kill the officer and retrieve the capsule—a comparatively small but symbolic act of violence that would make the tens of millions of deaths to come more visceral and real. 

Fisher’s Pentagon friends objected to his proposal, with one saying, “My God, that’s terrible. Having to kill someone would distort the president’s judgment. He might never push the button.” This revulsion, of course, was what Fisher wanted: that, in the moment of greatest urgency and fear, humanity would have one more chance to experience—at an emotional, even irrational, level—what was about to happen, and one more chance to turn back from the brink.

Just as Petrov’s independence prompted him to choose a different course, Fisher’s proposed symbolic killing of an innocent was meant to force one final reconsideration. Automating nuclear command and control would do the opposite, reducing everything to error-prone, stone-cold machine calculation. If the capsule with nuclear codes were embedded near the officer’s heart, if the neural network decided the moment was right, and if it could do so, it would—without hesitation and without understanding—plunge in the knife.

Apple fined by Dutch government for the ninth time for failing to comply with order to allow third-party in-app payments

Apple has reportedly been fined for the ninth time after failing to comply with the Dutch antitrust authority's order that dating-app developers on the local App Store be allowed to use payment methods other than Apple's own in-app payment system. …