After 24 years Black Star is back, but only on the Luminary podcasting platform

In 1998, Brooklyn-based hip-hop superstars Talib Kweli and yasiin bey (still then going by Mos Def and capitalizing his name) found themselves recording solo albums at the same time. With the support of DJ and producer Hi-Tek, they put their individual projects on hold and made Mos Def and Talib Kweli are Black Star, one of the most critically acclaimed albums in the history of hip-hop.

Now, Kweli and bey, this time with Madlib on the boards, have announced the imminent release of Black Star’s sophomore album, No Fear of Time, on May 3rd. But for reasons that remain unclear, the duo’s first drop in nearly a quarter century is exclusive to the Luminary podcast network.

“About 3-4 years ago I was visiting yasiin in Europe and we started to talk about songs to do on an album,” Kweli recalled in a Friday press release, “so I flew an engineer out just to see what that would be. Once I realized this conversation is starting to organically become a creative conversation, I started making sure to have the engineer around at all times. There was one day we were just in a hotel listening to Madlib beats, and he’s like ‘Play that Madlib tape again.’ I’m playing the beats and he starts doing rhymes to the beats. And that’s how we did the first song.”

Kweli added, “This is very similar to how we did the first album. But the first album, there were no mobile studios. This entire album, we have not set foot in one recording studio. It’s all been done in hotel rooms and backstage at Dave Chappelle shows.”

The 9-track album drops on May 3rd. You’ll need a Luminary subscription ($3 a month after a 7-day trial) or access to Apple Podcasts in order to listen. 

OpenAI’s DALL-E 2 produces fantastical images of most anything you can imagine

In January 2021, the OpenAI consortium — co-founded by Elon Musk and financially backed by Microsoft — unveiled its most ambitious project to date, the DALL-E machine learning system. This ingenious multimodal AI was capable of generating images (albeit rather cartoonish ones) based on the attributes described by a user — think “a cat made of sushi” or “an x-ray of a capybara sitting in a forest.” On Wednesday, the consortium unveiled DALL-E’s next iteration, which boasts higher resolution and lower latency than the original.

A bowl of soup that looks like a monster, knitted out of wool.
OpenAI

The first DALL-E (a portmanteau of “Dali,” as in the artist, and “WALL-E,” as in the animated Disney character) could generate images as well as combine multiple images into a collage, provide varying angles of perspective, and even infer elements of an image — such as shadowing effects — from the written description. 

“Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to ‘fill in the blanks’ when the caption implies that the image must contain a certain detail that is not explicitly stated,” the OpenAI team wrote in 2021.

Macro 35mm film photography of a large family of mice wearing hats cozy by the fireplace.
OpenAI

DALL-E was never intended to be a commercial product and was therefore somewhat limited in its abilities, given the OpenAI team’s focus on it as a research tool. It has also been intentionally capped to avoid a Tay-esque situation or the system being leveraged to generate misinformation. Its sequel has been similarly sheltered, with potentially objectionable images preemptively removed from its training data and a watermark automatically applied to indicate that an image is AI-generated. Additionally, the system actively prevents users from creating pictures based on specific names. Sorry, folks wondering what “Christopher Walken eating a churro in the Sistine Chapel” would look like.

DALL-E 2, which utilizes OpenAI’s CLIP image recognition system, builds on those image generation capabilities. Users can now select and edit specific areas of existing images, add or remove elements along with their shadows, mash up two images into a single collage, and generate variations of an existing image. What’s more, the output images are 1024px squares, up from the 256px images the original version generated. CLIP was designed to look at a given image and summarize its contents in a way humans can understand; for the new system, the consortium reversed that process, building an image from its summary.
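
To make that “reversed CLIP” idea concrete, here is a minimal conceptual sketch of the caption-to-image pipeline described above. Every function name and signature in it is a hypothetical placeholder for illustration only; none of this is OpenAI’s actual code or API.

```python
# Conceptual sketch only: every function here is a hypothetical stand-in for the
# models the article describes, not a real OpenAI library call.

def clip_text_encoder(caption: str) -> list[float]:
    """Hypothetical: map a caption into CLIP's shared text/image embedding space."""
    raise NotImplementedError

def prior(text_embedding: list[float]) -> list[float]:
    """Hypothetical: predict a matching image embedding from the text embedding."""
    raise NotImplementedError

def decoder(image_embedding: list[float], size: int = 1024):
    """Hypothetical: render a size-by-size picture from the image embedding."""
    raise NotImplementedError

def generate(caption: str):
    # CLIP normally summarizes an image into text-aligned features; DALL-E 2
    # effectively runs that pipeline in reverse: caption -> embedding -> pixels.
    return decoder(prior(clip_text_encoder(caption)), size=1024)
```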

Teddy bears mixing sparkling chemicals as mad scientists.
OpenAI

“DALL-E 1 just took our GPT-3 approach from language and applied it to produce an image: we compressed images into a series of words and we just learned to predict what comes next,” OpenAI research scientist Prafulla Dhariwal told The Verge.

Unlike the first, which anybody could play with on the OpenAI website, this new version is currently only available for testing by vetted partners, who themselves are limited in what they can upload or generate with it. Only family-friendly sources can be utilized, and anything involving nudity, obscenity, extremist ideology or “major conspiracies or events related to major ongoing geopolitical events” is right out. Again, sorry to the folks hoping to generate “Donald Trump riding a naked, COVID-stricken Nancy Pelosi like a horse through the US Senate on January 6th while doing a Nazi salute.”

A photo of an astronaut riding a horse.
OpenAI

The current crop of testers are also banned from exporting their generated works to a third-party platform, though OpenAI is considering adding DALL-E 2’s abilities to its API in the future. If you want to try DALL-E 2 for yourself, you can sign up for the waitlist on OpenAI’s website.

Hitting the Books: Raytheon, Yahoo Finance and the rise of the ‘cybersmear’ lawsuit

A company’s public image is arguably even more important to its bottom line than the products it produces, and very much not something to be trifled with. Would Disney be the entertainment behemoth it is today if not for its family-friendly facade? Would Google have garnered so much goodwill if not for its “don’t be evil” motto? Nobody’s going to buy your cars if they think the company is run by some “pedo guy.” With the scale of business that modern tech giants operate at and the amounts of money at stake, it’s little surprise that these titans of industry will eagerly leverage their legal departments to quash even the slightest sullying of their reputations. But they can only Cease and Desist you if they can find you.

In The United States of Anonymous: How the First Amendment Shaped Online Speech, author Jeff Kosseff, an associate professor of cybersecurity law in the United States Naval Academy’s Cyber Science Department, explores anonymity’s role in American politics and society, from its colonial and revolutionary era beginnings, to its extensive use by the civil rights movement, to the modern online sword of Damocles it is today. In the excerpt below, Kosseff recounts the time Raytheon got so mad about posts on the Yahoo! Finance message board that it tried to subpoena Yahoo! into giving up the real-life identities of anonymous users so it could, in turn, sue them for defamation.

US of Anonymous cover
Cornell University Press

Reprinted from The United States of Anonymous: How the First Amendment Shaped Online Speech, by Jeff Kosseff. Copyright (c) 2022 by Cornell University. Used by permission of the publisher, Cornell University Press.


“BONUSES WILL HAPPEN—BUT WHAT ARE THEY REALLY?”

That was the title of a November 1, 1998, thread on the Yahoo! Finance bulletin board dedicated to tracking the financial performance of Raytheon, the mammoth defense contractor. Like many publicly traded companies at the time, Raytheon was the subject of a Yahoo! Finance message board, where spectators commented and speculated on the company’s financial status. Yahoo! allowed users to post messages under pseudonyms, so its Finance bulletin boards quickly became a virtual — and public — water cooler for rumors about companies nationwide.

The Yahoo! Finance boards largely operated on the “marketplace of ideas” approach to free speech theory, which promotes an unregulated flow of speech, allowing the consumers of that speech to determine its veracity. Although Yahoo! Finance may have aspired to represent the marketplace of ideas, the market did not always quickly sort the false from the true. During the dot-com boom of the late 1990s, Yahoo! Finance users’ instant speculation about a company’s financial performance and stock price took on new importance to investors and companies. But some of these popular bulletin boards contained comments that were not necessarily helpful to fostering productive financial discussion. “While many message boards perform their task well, others are full of rowdy remarks, juvenile insults and shameless stock boosterism,” the St. Petersburg Times wrote in 2000. “Some boards are abused and fall prey to posters who try to manipulate a company’s stock, typically by pushing up its price with misleading information, then selling the stock near its peak.”

Corporate executives and public relations departments routinely monitored the bulletin boards, keenly aware that one negative post could affect employee morale and, more importantly, stock prices. And they did not have faith in the marketplace of ideas sorting out the truth from the falsities. While companies were accustomed to handling negative press coverage, the pseudonymous criticism on Yahoo! Finance was an entirely different world. Executives knew to whom they could complain if a newspaper’s business columnist wrote about inflated share prices or pending layoffs. Yahoo! Finance’s commenters, on the other hand, typically were not easily identifiable. They could be disgruntled employees, shareholders, or even executives.

The reputation-obsessed companies and executives could not use the legal system to force Yahoo! to remove posts that they believed were defamatory or contained confidential information. In February 1996, Congress passed Section 230 of the Communications Decency Act, which generally prevents interactive computer services—such as Yahoo!—from being “treated as the publisher or speaker” of user content. In November 1997, a federal appellate court construed this immunity broadly, and other courts soon followed. Congress passed Section 230 in part to encourage online platforms to moderate objectionable content, and the statute creates a nearly absolute bar to lawsuits for defamation and other claims arising from third-party content, whether or not they moderate. Section 230 has a few exceptions, including for intellectual property law and federal criminal law enforcement. Section 230 meant that an angry subject of a Yahoo! Finance post could not successfully sue Yahoo! for defamation, but could sue the poster. That person, however, often was difficult to identify by screen name.

Not surprisingly, the Yahoo! Finance bulletin boards would become the first major online battleground for the right to anonymous speech. Companies’ attempts in the late 1990s to unmask Yahoo! Finance posters would set the stage for decades of First Amendment battles over online anonymity.

A November 1, 1998, reply in the Raytheon bonuses thread came from a user named RSCDeepThroat. The four-paragraph post speculated on the size of bonuses. “Yes, there will be bonuses and possibly for only one year,” RSCDeepThroat wrote. “If they were really bonuses, the goals for each segment would have been posted and we would have seen our progress against them. They weren’t, and what we get is black magic. Even the segment execs aren’t sure what their numbers are.” RSCDeepThroat predicted bonuses would be less than 5 percent. “That’s good as many sites are having rate problems largely due to the planned holdback of 5%. When it becomes 2%, morale will take a hit, but customers on cost-plus jobs will get money back and we will get bigger profits on fixed-price jobs.”

RSCDeepThroat posted again, on January 25, 1999, in a thread with the title “98 Earnings Concern.” The poster speculated about business difficulties at Raytheon’s Sensors and Electronics Systems unit. “Word running around here is that SES took a bath on some programs that was not discovered until late in the year,” RSCDeepThroat posted. “I don’t know if the magnitude of those problems will hurt the overall Raytheon bottom line. The late news cost at least one person under Christine his job. Maybe that is the apparent change in the third level.” The poster speculated that Chief Executive Dan Burnham “is dedicated to making Raytheon into a lean, nimble, quick competitor.” Although RSCDeepThroat did not provide his or her real name, the posts’ discussion of specifics—such as the termination of someone who worked for “Christine”—suggested that RSCDeepThroat worked for Raytheon or was receiving information from a Raytheon employee.

RSCDeepThroat and the many other people who posted about their employers on Yahoo! Finance had good reason to take advantage of the pseudonymity that the site provided. Perhaps the most important driver was the Economic Motivation; if their real names were linked to their posts, they likely would lose their jobs. Likewise, the Legal Motivation drove their need to protect their identities, as many employers had policies against disclosing confidential information, and some companies require their employees to sign confidentiality agreements. And the Power Motivation also was a likely factor in the behavior of some Yahoo! Finance posters—suddenly, the words and feelings of everyday employees mattered to the company’s top executives.

Raytheon sought to use its legal might to silence anonymous posters. The prospect of inside information being blasted across the Internet apparently rankled Raytheon’s executives so much that the company sued RSCDeepThroat and twenty other Yahoo! Finance posters for breach of contract, breach of employee policy, and trade secret misappropriation in state court in Boston. In the complaint, the company wrote that all Raytheon employees are bound by an agreement that prohibits unauthorized disclosure of the company’s proprietary information. Raytheon claimed that RSCDeepThroat’s November post constituted “disclosure of projected profits,” and the January post was “disclosure of inside financial issues.”

Raytheon’s complaint stated only that the company sought damages in excess of twenty-five thousand dollars. Litigating this case might cost more than any money the company would recover in settlements or jury verdicts. The lawsuit would, however, allow Raytheon to attempt to gather information to identify the authors of the critical posts.

Raytheon’s February 1, 1999, complaint was among the earliest of what would become known as “cybersmear” lawsuits, in which a company filed a complaint against (usually pseudonymous) online critics. Because of its high visibility and large number of pseudonymous critics, Yahoo! Finance was ground zero for cybersmear lawsuits.

Because Raytheon only had the posters’ screen names, the defendants listed on the complaint included RSCDeepThroat, WinstonCar, DitchRaytheon, RayInsider, RaytheonVeteran, and other monikers that provided no information about the posters’ identities. To appreciate the barriers that the plaintiffs faced, it first is necessary to understand the taxonomy that applies to the levels of online identity protection. This was best explained in a 1995 article by A. Michael Froomkin. He summarized four levels of protection:

  • Traceable anonymity: “A remailer that gives the recipient no clues as to the sender’s identity but leaves this information in the hands of a single intermediary.”

  • Untraceable anonymity: “Communication for which the author is simply not identifiable at all.”

  • Untraceable pseudonymity: The message is signed with a pseudonym that cannot be traced to the original author. The author might use a digital signature “which will uniquely and unforgeably distinguish an authentic signed message from any counterfeit.”

  • Traceable pseudonymity: “Communication with a nom de plume attached which can be traced back to the author (by someone), although not necessarily by the recipient.” Froomkin wrote that under this category, a speaker’s identity is more easily identifiable, but it more easily allows communication between the speaker and other people.

Although traceable anonymity and traceable pseudonymity are not substantially different from a technical standpoint (in both cases, the speakers can be identified), Margot Kaminski argues that a speaker’s choice to communicate pseudonymously rather than anonymously might have an impact on their expression because pseudonymous communication “allows for the adoption of a developing, ongoing identity that can itself develop an image and reputation.”

Yahoo! Finance largely fell into the category of traceable pseudonymity. Yahoo! did not require users to provide their real names before posting. But it did require them to use a screen name and asked for an email address (though there often was no guarantee that the email address alone would reveal their identifying information). It automatically logged their Internet Protocol (IP) addresses, unique numbers associated with a particular Internet connection. Plaintiffs could use the legal system to obtain this information, which could lead to their identities, albeit with no guarantee of success.

Many Americans distrust emerging technology, new study finds

For more than a century, popular science fiction has promised us a future filled with robotics and AI technologies. In 2022, many of those dreams are being realized — computers recognize us on sight and cars can drive themselves, we’re building intelligent exoskeletons that multiply our strength and implanting computers in our skulls to augment our intelligence — but that doesn’t mean most of America trusts these breakthrough technologies any further than they can throw them. Quite the opposite, in fact.

A recently published survey from Pew Research polled some 10,260 US adults in November 2021 about their views on six technologies emerging in the fields of robotics and artificial intelligence/machine learning. Specifically, canvassers asked about more mainstream systems like the use of facial recognition technology by police, the fake news-flagging algorithms used by social media platforms, and autonomous vehicle technology, as well as more cutting-edge ideas like brain-computer interfaces, gene editing and powered exoskeletons. The responses largely topped out at tepid, with only minorities of respondents having heard much about a given technology and even fewer willing to become early adopters once these systems are available to the general public.

The Pew research team found a number of broad trends regarding which demographics were most accepting of these advances. College-educated white male Millennials and Gen Xers versed in the tech’s development were far more willing to ride in a driverless taxi or let Elon Musk rummage around in their heads. Women, Boomers, and folks hearing about BCIs for the first time, much less so. The Pew team also noted correlations between acceptance of a given technology and a person’s religious affinity and level of education.

Pew Overview AI opinions
Pew Research Center

Police Use of Facial Recognition

Computer vision and facial recognition technology is already widespread. Amazon uses it in its cashierless Go stores, Facebook uses it to moderate user-posted content, the IRS recently (and briefly) considered using it for tax filings, and law enforcement has embraced the technology for criminal investigations and missing persons cases. The survey’s respondents largely believed that continued use in law enforcement would “likely help find missing persons and solve crimes,” but also conceded that “it is likely that police would use this technology to track everyone’s location and surveil Black and Hispanic communities more than others.”

In all, 46 percent of respondents thought widespread facial recognition use by the police would be a “good idea” for society, while 27 percent figured it would be bad and another 27 percent were unsure either way. Both Americans over 50 and those with a high school diploma or less agreed in equal measure (52 percent of respondents) that it would be a net positive, though the researchers note that people who “have heard or read a lot about the use of facial recognition technology by police” are far more likely to say it’s a bad idea.

Whether they think police using facial recognition is net good or bad for society, a majority of the respondents agree that even if the technology were to become ubiquitous, it would have little impact on crime rates. Some 57 percent of those surveyed guess that rates will remain steady while another 8 percent of them are rooting for the maniacs and figure crime will actually increase in response to adoption of this technology.

Partisan divide on AI
Pew Research Center

Social Media Moderation Algorithms

Lying is as fundamental a part of the internet as subnet masks – just ask any dog. But with 70 percent of the American populace online and on social media, the smallest morsels of misinformation and the biggest lies can become massively amplified as they spread via recommendation algorithms, often blurring the lines between reality and political fantasy. In an effort to prevent people from falling down internet rabbit holes, many social media companies have instituted additional AI systems to monitor and moderate misinformation posted to their platforms. And if you think the American people trust those algorithms, hoo boy, do I have some ivermectin to sell you.

Only 38 percent of those surveyed thought that using algorithms to monitor these digital hellscapes was a good idea for society. That’s 3 points lower than Trump’s average approval rating during his tenure. The remaining 62 percent of respondents were split evenly between ambivalence and thinking it would be bad for society. Overall, a majority believe that these automated moderation efforts “are not helping the social media information environment and at times might be worsening it,” per the report.

Unsurprisingly, opinions on this matter skew heavily depending on the respondent’s political affiliation. While majorities of both Democrats and Republicans agree that “political censorship and wrongful removal of information are definitely or probably happening as a result of the widespread use of these algorithms,” it is the latter group that is far more likely to say so.

Republicans and those leaning R were 28 percent more likely to believe in political censorship on the part of algorithms and 26 percent more likely to believe they were wrongly removing information.

Conversely, Democrats and D leaners were twice as likely to “say it is getting easier to find trustworthy information on social media sites due to widespread use of algorithms,” and those who hold that opinion are 19 percent more likely than Republicans to believe that algorithms are “allowing people to have more meaningful conversations.”

As with facial recognition, the amount of experience people have with the technology shapes their views on it: opinion leans negative among those with the most exposure, with around half of that group calling the algorithms a bad idea.

A Waymo self-driving car pulls into a parking lot at the Google-owned company's headquarters in Mountain View, California, on May 8, 2019.
GLENN CHAPMAN via Getty Images

Autonomous Vehicles

Perhaps the most visible technology that the Pew team inquired about is vehicle automation. We’re already seeing driverless taxis cruise the streets of San Francisco while advanced driver assist systems rapidly evolve, despite the occasional kamikaze strike against nearby emergency response vehicles. The Pew team asked people, “How will this impact people who drive for a living? Are Americans willing to give up control to a machine? And whose safety should be prioritized in a potential life-or-death situation?” The people responded, “Bad, no, and pedestrians, but if we really have to.”

Respondents thought that the widespread use of driverless passenger vehicles is a bad idea for society by an 18-point margin (44 percent bad to 26 percent good), with nearly a third of people unsure. What’s more, the number of people unwilling to even ride in a fully autonomous vehicle is nearly double those who would take the ride (63 percent no to 37 percent yes). Older Americans are far less likely to get behind the wheel of an autonomous vehicle than those under 50, with only 25 percent of 50-plus-year-olds open to the idea compared to 47 percent of younger respondents. Men are more willing to ride in a driverless car than women — 46 percent versus 27 percent — as are people with a bachelor’s degree or higher compared to high school graduates.

Americans’ reticence extends to the other side of the door as well. Forty-five percent of Pew’s respondents “say they would not feel comfortable sharing the road with driverless vehicles if use of them became widespread,” including 18 percent who would “not feel comfortable at all.” Only 7 percent said they would be “extremely comfortable” sharing the road.

That’s not to say that Americans are completely against the idea of self-driving vehicles. A whopping 72 percent of people surveyed said that autonomous cars would help the elderly and disabled to live more independent lives while 56 percent figure it will make trips less stressful. But they are widely concerned (as in, 83 percent of them) that widespread adoption of the tech would cause drivers and delivery personnel to lose their jobs and 76 percent think the technology will put vehicles at risk of being hacked.

The Trolley Problem is Solved
Pew Research Center

In terms of safety, 39 percent of people think that traffic deaths and injuries will fall once autonomous vehicles become ubiquitous while 27 percent think they’ll rise. Regardless of which direction folks think these trends will go, they agree at a rate of more than 2 to 1 (40 percent to 18) that “the computer system guiding the driverless car should prioritize the safety of the vehicle’s passengers, rather than those outside of the vehicle” in the event of an unavoidable crash. Turns out the trolley problem wasn’t that tough to solve after all.

Pew’s other three topics — BCIs, gene editing and exoskeletons — are not nearly as commercially available as advanced driver assist systems and facial recognition, but that hasn’t stopped Americans from inherently distrusting them, even if they’re also kind of intrigued by the possibility.

Two-thirds of respondents would be “at least somewhat excited about the possibility of changing human capabilities to prevent serious diseases or health conditions” including 47 percent excited for cognitive enhancements, 24 percent on board for auditory enhancement, 44 percent in favor of strength augmentations and 41 percent apiece for visual and longevity enhancements. But only half of those surveyed would want these procedures done for themselves or their children.

How these technologies are employed makes a big difference in people’s opinions of them. For example, 79 percent of respondents are in favor of exoskeletons, so long as they are used to help the physically disabled; 77 percent want BCIs if they’ll help paralyzed people regain motor function; and 71 percent are cool with gene editing to fix a person’s current disease or health condition. But at the same time, 74 percent are against using CRISPR to make more attractive babies and 49 percent are against giving exoskeletons to recreational users.

Athletes using robotic assistive technologies, including brain-computer interfaces and exoskeletons, compete in the Cybathlon.
BSIP via Getty Images

Brain Computer Interfaces

The days of Johnny Mnemonic are never going to arrive if the study’s average respondent has their way. Fifty-six percent of US adults think the widespread adoption of BCIs will be a bad thing for society (compared to just 13 percent dissenting). Seventy-eight percent are against having one installed, versus 20 percent actively in favor, and yet roughly 60 percent of them say that “people would feel pressure” to get a BCI “should implanted devices of this sort become widespread.”

Men, ever the eager guinea pigs, are far more open to getting chipped than women (20 percent to 6), though at least half of both genders (50 percent of men and 61 percent of women) possess sufficient survival instincts to decline the opportunity. However, people were more receptive to the idea if the option to manually turn the implant on and off (59 percent in favor) were included or if implantation didn’t require surgery (53 percent in favor).

What’s more, only 24 percent of US adults believe that this augmentation would lead to improvements in judgment and decision-making, compared to 42 percent who do not. Seventy percent also believe that such implants “would go too far in eliminating natural differences between people.”

Abstract 3D rendering of DNA mutations and chromosomes.
koto_feja via Getty Images

Editing genes to fight preventable disease

As with brain implants, many people think gene editing is cool but not for them personally: nearly half of Americans (49 percent) would decline to have their child’s genome edited to prevent hereditary diseases. Fifty-two percent believe that such edits would be “crossing a line we should not cross,” compared to 46 percent who say it is in line with previous efforts at augmenting human capabilities.

While only 39 percent of Americans foresee widespread gene editing making people’s lives better (versus 40 percent expecting no change and 18 percent expecting worse), some 73 percent believe “most parents would feel pressure to get gene editing for their baby if such techniques became widespread.” More than half say these genomic procedures should be restricted to adults who can give consent, though 49 percent say that allowing people to choose which disease is treated would make the practice more acceptable.

Robotic exoskeletons to augment physical capabilities

Even if we’re not poking electrical leads into your various motor cortices or using atomic shears to play Tetris with your chromosomes, Americans just aren’t into using tech to endow humans with heightened capabilities. Only a third of people think the adoption of exoskeletons like the Cray X from German Bionic would lead to better working conditions, while 31 percent of those surveyed thought it would make matters worse. Overall, just 33 percent of people think these systems would be good for society, while nearly a quarter (24 percent) think they would be bad. That said, 57 percent of people also told Pew that they’d heard nothing about exoskeletons with which to inform their opinions, compared to 37 percent having heard “a little” and 6 percent “a lot.”

It's Sigourney Weaver in the P-5000 looking badass
20th Century Studios

Opinions of the technology improved with familiarity: 48 percent of those who had heard even a little about exoskeletons said they would be good for society, compared to 22 percent of those who’d heard nothing. Men took a moment from fantasizing about the P-5000 to answer in the affirmative at a rate of more than 2 to 1 (46 percent to 19) that exoskeletons are good and cool and how do I get one. Women, meanwhile, believe their widespread adoption would be a detriment to society by a margin of 29 percent to 21.

Respondents were largely concerned with the economic impacts this technology would have on the labor market. Eighty-one percent of Americans fear it would prompt employers to lay off human workers, while 73 percent are worried that “workers would probably or definitely lose strength from relying too much on the exoskeletons.”

A baggage handler wears a Cray X Exoskeleton from German Bionic while handling a baggage.
German Bionic

Still, the respondents did often see the potential benefits of employing exoskeletons in the workplace. Approximately 70 percent said workers would “probably or definitely” be hurt less on the job, and 65 percent believe that the tech will open the field of manual labor to people who otherwise wouldn’t be physically capable of doing the work. Respondents were also broadly in favor of requiring a license to operate these devices (68 percent) and of using them to assist people with physical limitations (79 percent). Those surveyed were also strongly in favor (77 percent) of letting firefighters use the tech to boost their abilities in emergencies.

Microsoft’s online-only Build 2022 event kicks off May 24

Since the start of the COVID-19 outbreak in March 2020, Microsoft’s annual Build conference for developers, engineers and IT professionals has been held online. But even after two years of lockdowns, and nearly 15 months since the release of effective vaccines, Build 2022 will once again be hosted online, from May 24th through the 26th.

While the conference is largely geared toward professionals, plenty of consumer-facing tech has emerged from previous years’ events, from Microsoft Teams improvements to the “next generation” of Windows. This year, attendees will “experience market-specific content and connection opportunities for France, Germany, Japan, Latin America, and the UK in Regional Spotlights,” according to the conference’s launch site, in addition to the standard slate of keynotes, workshops, and networking opportunities.

Registration for the conference opens in late April and will be free. You can check out the event agenda at the Build 2022 homepage.  

After 355 days aboard the ISS, astronaut Mark Vande Hei returns to Earth a changed man

After 355 days aboard the ISS, NASA astronaut and five-time flight engineer Mark T. Vande Hei returns to Earth as the record holder for the longest single spaceflight in NASA history, having surpassed Commander Scott Kelly’s 340-day mark set in 2016. Though not as long as Peggy Whitson’s 665 cumulative days spent in microgravity, Vande Hei’s accomplishment is still one of the longest single stints in human spaceflight, just behind Russia’s Valeri Polyakov, who was aboard the Mir for 438 straight days (that’s more than 14 months) back in the mid-1990s.

Though NASA’s Human Research Program has spent 50 years studying the effects that microgravity and the rigors of spaceflight have on the human body, the full impact of long-duration space travel has yet to be exhaustively researched. As humanity’s expansion into space accelerates in the coming decades, more people will be going into orbit — and much farther — both more regularly and for longer than anyone has in the past half century, and they’ll invariably need medical care while they’re out there. To fill that need, academic institutes like the Center for Space Medicine at the Baylor College of Medicine in Houston, TX, have begun training a new generation of medical practitioners with the skills necessary to keep tomorrow’s commercial astronauts alive on the job.

Even traveling the relatively short 248-mile distance to the International Space Station does a number on the human body. The sustained force generated during liftoff can hit 3 gs, though “the most important factors in determining the effects the sustained acceleration will have on the human body is the rate of onset and the peak sustained g force,” Dr. Eric Jackson wrote in his 2017 dissertation, An Investigation of the Effects of Sustained G-Forces on the Human Body During Suborbital Spaceflight. “The rate of onset, or how fast the body accelerates, dictates the ability to remain conscious, with a faster rate of onset leading to a lower g-force threshold.”

Untrained civilians will begin feeling these effects at 3 to 4 gs, but with practice and support equipment like high-g suits, seasoned astronauts can resist the effects until around 8 or 9 gs. The unprotected human body, however, can only withstand about 5 gs of persistent force before blacking out.

Once the primary and secondary rocket stages have been expended, the pleasantness of the spaceflight improves immensely, albeit temporarily. As Leroy Chiao, a NASA veteran with 230 cumulative days in space, told Space in 2016, as soon as the main engines cut out, the crushing gs subside and “you are instantly weightless. It feels as if you suddenly did a forward roll on a gym mat, as your brain struggles to understand the odd signals coming from your balance system.”

“Dizziness is the result, and this can again cause some nausea,” he continued. “You also feel immediate pressure in your head, as if you were lying down head first on an incline. At this point, because gravity is no longer pulling fluid into your lower extremities, it rises into your torso. Over the next few days, your body will eliminate about two liters of water to compensate, and your brain learns to ignore your balance system. Your body equilibrates with the environment over the next several weeks.”

Roughly half of people who have traveled into orbit to date have experienced this phenomenon, which has been dubbed Space Adaptation Syndrome (SAS), though as Chiao noted, the status debuffs do lessen as the astronaut’s vestibular system readjusts to their weightless environment. And even as the astronaut adapts to function in their new microgravity surroundings, their body is undergoing fundamental changes that will not abate, at least until they head back down the gravity well.

“After a long-duration flight of six or more months, the symptoms are somewhat more intense,” Chiao said. “If you’ve been on a short flight, you feel better after a day or two. But after a long flight, it usually takes a week, or several, before you feel like you’re back to normal.”

“Spaceflight is draining because you’ve taken away a lot of the physical stimulus the body would have on an everyday basis,” Dr. Jennifer Fogarty from Baylor’s Center for Space Medicine told Engadget.

“Cells can convert mechanical inputs into biochemical signals, initiating downstream signaling cascades in a process known as mechanotransduction,” researchers from the University of Siena noted in their 2021 study, The Effect of Space Travel on Bone Metabolism. “Therefore, any changes in mechanical loading, for example, those associated with microgravity, can consequently influence cell functionality and tissue homeostasis, leading to altered physiological conditions.”

Without those sensory inputs and environmental stressors that would normally prompt the body to maintain its current level of fitness, our muscles will atrophy — up to 40 percent of their mass, depending on the length of the mission — while our bones can lose their mineral density at a rate of 1 to 2 percent every month.

“Your bones are … being continually eaten away and replenished,” pioneering Canadian astronaut Bjarni Tryggvason told CBC in 2013. “The replenishment depends on the actual stresses in your bones and it’s mainly … bones in your legs where the stresses are all of a sudden reduced [in space] that you see the major bone loss.”

This leaves astronauts highly susceptible to breaks, as well as kidney stones, upon their return to Earth, and they generally require two months of recovery for every month spent in microgravity. In fact, a 2000 study found that the bone loss from six months in space “parallels that experienced by elderly men and women over a decade of aging on Earth.” Even intensive daily sessions with the treadmill, cycle ergometer and ARED (Advanced Resistance Exercise Device) aboard the ISS, paired with a balanced, nutrient-rich diet, have only been shown to be partially effective at offsetting the incurred mineral losses.

And then there’s the space anemia. According to a study published in the journal Nature Medicine, the bodies of astronauts appear to destroy red blood cells faster while in space than they would here on Earth. “Space anemia has consistently been reported when astronauts returned to Earth since the first space missions, but we didn’t know why,” study author Guy Trudel said in a January 14 statement. “Our study shows that upon arriving in space, more red blood cells are destroyed, and this continues for the entire duration of the astronaut’s mission.”

This is not a short-term adaptation, as previously believed, the study found. The human body on Earth will produce and destroy around 2 million red blood cells every second. However, that number jumps to roughly 3 million per second while in space, a 54 percent increase that researchers attribute to fluid shifts in the body as it adapts to weightlessness.

Recent research also suggests that our brains are actively “rewiring” themselves in order to adapt to microgravity. A study published in Frontiers in Neural Circuits investigated structural changes to the brain’s white matter after space travel, using MRI data collected from a dozen cosmonauts before and after stays aboard the ISS lasting about 172 days apiece. Researchers discovered changes in the neural connections between different motor areas within the brain, as well as changes to the shape of the corpus callosum, the band of white matter that connects the brain’s two hemispheres, again due to fluid shifts.

“These findings give us additional pieces of the entire puzzle,” study author Floris Wuyts of the University of Antwerp told Space. “Since this research is so pioneering, we don’t know how the whole puzzle will look yet. These results contribute to our overall understanding of what’s going on in the brains of space travelers.”

As the transition towards commercial space flight accelerates and the orbital economy further opens for business, opportunities to advance space medicine increase as well. Fogarty points out that government space flight programs and installations are severely limited in the number of astronauts they can handle simultaneously — the ISS holds a whopping seven people at a time — which translates into multi-year long queues for astronauts waiting to go into space. Commercial ventures like Orbital Reef will shorten those waits by expanding the number of space-based positions available which will give institutions like the Center for Space Medicine more, and more diversified, health data to analyze.

“The diversity of the types of people that are capable and willing to go [into space for work] really opens up this aperture on understanding humanity,” Fogarty said, “versus the [existing] select population that we always struggle to match to or interpret data from.”

Even returning from space is fraught with physiological peril. Dr. Fogarty points out that, while in space, the gyroscopic organs in the inner ear adapt to the new environment, which is what helps alleviate the symptoms of SAS. However, that adaptation works against astronauts when they return to full gravity, especially amid the chaotic forces of reentry, when they can be shocked by the sudden return of amplified sensory information. It’s roughly equivalent, she explains, to continuing to turn up the volume on a stereo with a wonky input port: you hear nothing as you rotate the knob, right up until the moment the input’s plug wiggles just enough to connect and you blow your eardrums out because you’d dialed the volume up to 11 without realizing it.

“Your brain has acclimated to an environment, and very quickly,” Fogarty said. “But the organ systems in your ear haven’t caught up to the new environment.” These effects, like SAS, are temporary and do not appear to limit the amount of times an astronaut can venture up to orbit and return. “There’s really no evidence to say that we would know there would be a limit,” she said, envisioning it could end up being more of a personal choice in deciding if the after-effects and recovery times are worth it for your next trip to space.

Lotus unveils its first electric vehicle, the Eletre ‘Hyper-SUV’

The electric revolution is no longer limited to daily drivers and eco-commuters. Luxury brands such as Audi, BMW, Mercedes and Porsche have already begun augmenting their lineups with EV variants, while hypercar makers like Lamborghini and Ferrari expect their first electrics to arrive in the next few years. On Tuesday, British automaker Lotus announced that it too has an EV, the 600HP Eletre, with deliveries beginning next year in China, Europe and the UK.

Lotus Eletre exterior
Lotus

Developed under the codename Type 132, the Eletre “takes the heart and soul of the latest Lotus sports car – the Emira – and the revolutionary aero performance of the all-electric Evija hypercar, and reinterprets them as a Hyper-SUV,” according to the company’s press release. It also accomplishes a number of firsts, the release continued: “first five-door production car, the first model outside sports car segments, the first lifestyle EV, the most ‘connected’ Lotus ever.”

The Eletre was developed atop Lotus’ 800V Electric Premium Architecture (EPA) platform. That voltage puts it on par with the Audi e-tron GT and Hyundai Ioniq 5, meaning that on a 350 kW DC fast charger, drivers will be able to add around 248 miles of range in a 20-minute charge, according to the company. Lotus hasn’t specified how big the battery will be beyond saying that it “has a battery capacity that’s over 100 kWh,” but the company is estimating a total range of 373 miles, equivalent to that of the Tesla Model X Long Range Plus. Its dual front and rear motors will reportedly output 600 horsepower, producing a top speed of 161 MPH and a sub-3-second 0-60 time.

Lotus Eletre exterior
Lotus

Ben Payne led development of the Eletre’s exterior design, which features “porous” aerodynamics, a low stance atop the platform’s long wheelbase with short overhangs at either end. “The Eletre is a progressive all-electric performance vehicle embodying emotion, intelligence and prestige and, as the first of the brand’s lifestyle cars, it sets the standard for what will follow,” he said. “We have taken the iconic design language of the Lotus sports car and successfully evolved it into an elegant and exotic Hyper-SUV.”  

The interior will offer either the traditional two-buckets-and-a-bench layout or an optional four individual seats, front and rear, beneath a fixed panoramic sunroof. The material choices for the cabin reflect Lotus’ net-zero goals, with “durable man-made microfibres on the primary touchpoints, and an advanced wool-blend fabric on the seats,” while the hard parts are constructed from little bits of carbon fiber recycled from the edge of weaves rather than being made specially.      

Lotus Eletre exterior
Lotus

The infotainment system is a whole production. “Below the instrument panel a blade of light runs across the cabin, sitting in a ribbed channel that widens at each end to create the air vents,” Tuesday’s announcement read. This light blade serves as part of the vehicle’s HMI and changes color to alert occupants of important events like incoming calls. 

Below that is a 30mm-tall “ribbon of technology.” On the driver’s side, that ribbon serves as the instrument cluster, displaying vehicle and trip information, which can also be displayed via the AR system that comes standard. On the passenger side, a second ribbon shows relevant contextual information, like nearby points of interest or the current music selection, which plays through a KEF Premium 1,380-watt, 15-speaker surround sound setup with Uni-Q.

Lotus Eletre exterior
Lotus

Between these two ribbons sits a 15.1-inch OLED touchscreen infotainment system that folds away when not in use. While most of the cabin controls are digital and can be used either through the touchscreen or voice interfaces, Lotus deemed some functions vital enough to warrant being mirrored to physical knobs and switches, so drivers won’t have to dig through submenus to turn on the windshield wipers. Even with those digital controls, Lotus boasts that “with three touches of the main screen users can access 95 percent of the car’s functionality.”


The Eletre is also the first vehicle on the market with a deployable LIDAR array. Used to supplement the driver assist system, the sensors pop up from the top of the windscreen, the top of the rear glass and the front wheel arches — like the headlights on a 1990 MX-5 — when in use, and then retract when finished to maintain aerodynamics.

“ADAS technologies such as LIDAR sensors and cameras will become increasingly common on new cars as we move into a more autonomous era, and to have the world’s first deployable LIDAR system on the Eletre is a signal of the technology vision we have for Lotus,” said Maximilian Szwaj, Vice President of Lotus Technology and Managing Director, LTIC. “This car has tech for today, and also for tomorrow, as it’s been developed to accept OTA updates as standard.”

Lotus Eletre exterior
Lotus

Manufacturing begins later this year at Lotus’ new production plant in Wuhan, China, with deliveries slated for 2023. The model will be available first in China, Europe and the UK. The company hasn’t disclosed pricing details yet.

Hitting the Books: The Soviets once tasked an AI with our mutually assured destruction

Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no strangers to nuclear brinksmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie’s latest book, we can see how close humanity came to an atomic holocaust in 1983 and why an increasing reliance on automation — on both sides of the Iron Curtain — only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of “data, algorithms, and computing power”) are transforming how nations wage war both domestically and abroad.

The New Fire Cover
MIT Press

Excerpted from The New Fire: War, Peace, and Democracy in the Age of AI by Andrew Imbrie and Ben Buchanan. Published by MIT Press. Copyright © 2021 by Andrew Imbrie and Ben Buchanan. All rights reserved.


THE DEAD HAND

As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began. At least, that was what the alarms said at the bunker in Moscow where Lieutenant Colonel Stanislav Petrov was on duty. 

Inside the bunker, sirens blared and a screen flashed the word “launch.” A missile was inbound. Petrov, unsure if it was an error, did not respond immediately. Then the system reported two more missiles, and then two more after that. The screen now said “missile strike.” The computer reported with its highest level of confidence that a nuclear attack was underway.

The technology had done its part, and everything was now in Petrov’s hands. To report such an attack meant the beginning of nuclear war, as the Soviet Union would surely launch its own missiles in retaliation. To not report such an attack was to impede the Soviet response, surrendering the precious few minutes the country’s leadership had to react before atomic mushroom clouds burst out across the country; “every second of procrastination took away valuable time,” Petrov later said. 

“For 15 seconds, we were in a state of shock,” he recounted. He felt like he was sitting on a hot frying pan. After quickly gathering as much information as he could from other stations, he estimated there was a 50-percent chance that an attack was under way. Soviet military protocol dictated that he base his decision off the computer readouts in front of him, the ones that said an attack was undeniable. After careful deliberation, Petrov called the duty officer to break the news: the early warning system was malfunctioning. There was no attack, he said. It was a roll of the atomic dice.

Twenty-three minutes after the alarms—the time it would have taken a missile to hit Moscow—he knew that he was right and the computers were wrong. “It was such a relief,” he said later. After-action reports revealed that the sun’s glare off a passing cloud had confused the satellite warning system. Thanks to Petrov’s decisions to disregard the machine and disobey protocol, humanity lived another day.

Petrov’s actions took extraordinary judgment and courage, and it was only by sheer luck that he was the one making the decisions that night. Most of his colleagues, Petrov believed, would have begun a war. He was the only one among the officers at that duty station who had a civilian, rather than military, education and who was prepared to show more independence. “My colleagues were all professional soldiers; they were taught to give and obey orders,” he said. The human in the loop — this particular human — had made all the difference.

Petrov’s story reveals three themes: the perceived need for speed in nuclear command and control to buy time for decision makers; the allure of automation as a means of achieving that speed; and the dangerous propensity of those automated systems to fail. These three themes have been at the core of managing the fear of a nuclear attack for decades and present new risks today as nuclear and non-nuclear command, control, and communications systems become entangled with one another. 

Perhaps nothing shows the perceived need for speed and the allure of automation as much as the fact that, within two years of Petrov’s actions, the Soviets deployed a new system to increase the role of machines in nuclear brinkmanship. It was properly known as Perimeter, but most people just called it the Dead Hand, a sign of the system’s diminished role for humans. As one former Soviet colonel and veteran of the Strategic Rocket Forces put it, “The Perimeter system is very, very nice. We remove unique responsibility from high politicians and the military.” The Soviets wanted the system to partly assuage their fears of nuclear attack by ensuring that, even if a surprise strike succeeded in decapitating the country’s leadership, the Dead Hand would make sure it did not go unpunished.

The idea was simple, if harrowing: in a crisis, the Dead Hand would monitor the environment for signs that a nuclear attack had taken place, such as seismic rumbles and radiation bursts. Programmed with a series of if-then commands, the system would run through the list of indicators, looking for evidence of the apocalypse. If signs pointed to yes, the system would test the communications channels with the Soviet General Staff. If those links were active, the system would remain dormant. If the system received no word from the General Staff, it would circumvent ordinary procedures for ordering an attack. The decision to launch would then rest in the hands of a lowly bunker officer, someone many ranks below a senior commander like Petrov, who would nonetheless find himself responsible for deciding if it was doomsday.
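
Purely as an illustration of the if-then chain the excerpt describes, here is a minimal sketch of that decision logic. The real Perimeter software has never been published, so the inputs, function name, and outcomes below are assumptions drawn only from the paragraph above.

```python
# Hypothetical illustration of the decision chain described in the excerpt;
# not based on any actual Perimeter documentation.

def dead_hand_decision(seismic_rumble: bool, radiation_burst: bool,
                       general_staff_reachable: bool) -> str:
    """Return the system's next step, given the indicators the passage lists."""
    attack_suspected = seismic_rumble or radiation_burst
    if not attack_suspected:
        return "remain dormant"        # no evidence of the apocalypse
    if general_staff_reachable:
        return "remain dormant"        # leadership is alive and can decide itself
    # Leadership presumed decapitated: bypass ordinary launch procedure and
    # hand the decision to the duty officer in the bunker.
    return "transfer launch authority to bunker officer"

# Example: sensors trip but the General Staff still answers, so the system stays dormant.
print(dead_hand_decision(True, False, True))
```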

The United States was also drawn to automated systems. Since the 1950s, its government had maintained a network of computers to fuse incoming data streams from radar sites. This vast network, called the Semi-Automatic Ground Environment, or SAGE, was not as automated as the Dead Hand in launching retaliatory strikes, but its creation was rooted in a similar fear. Defense planners designed SAGE to gather radar information about a potential Soviet air attack and relay that information to the North American Aerospace Defense Command, which would intercept the invading planes. The cost of SAGE was more than double that of the Manhattan Project, or almost $100 billion in 2022 dollars. Each of the twenty SAGE facilities boasted two 250-ton computers, which each measured 7,500 square feet and were among the most advanced machines of the era.

If nuclear war is like a game of chicken — two nations daring each other to turn away, like two drivers barreling toward a head-on collision — automation offers the prospect of a dangerous but effective strategy. As the nuclear theorist Herman Kahn described:

The “skillful” player may get into the car quite drunk, throwing whisky bottles out the window to make it clear to everybody just how drunk he is. He wears very dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem; likewise, if both players try this strategy. 

To automate nuclear reprisal is to play chicken without brakes or a steering wheel. It tells the world that no nuclear attack will go unpunished, but it greatly increases the risk of catastrophic accidents.

Automation helped enable the dangerous but seemingly predictable world of mutually assured destruction. Neither the United States nor the Soviet Union was able to launch a disarming first strike against the other; it would have been impossible for one side to fire its nuclear weapons without alerting the other side and providing at least some time to react. Even if a surprise strike were possible, it would have been impractical to amass a large enough arsenal of nuclear weapons to fully disarm the adversary by firing multiple warheads at each enemy silo, submarine, and bomber capable of launching a counterattack. Hardest of all was knowing where to fire. Submarines in the ocean, mobile ground-launched systems on land, and round-the-clock combat air patrols in the skies made the prospect of successfully executing such a first strike deeply unrealistic. Automated command and control helped ensure these units would receive orders to strike back. Retaliation was inevitable, and that made tenuous stability possible. 

Modern technology threatens to upend mutually assured destruction. When an advanced missile called a hypersonic glide vehicle nears space, for example, it separates from its booster rockets and accelerates down toward its target at five times the speed of sound. Unlike a traditional ballistic missile, the vehicle can radically alter its flight profile over long ranges, evading missile defenses. In addition, its low-altitude approach renders ground-based sensors ineffective, further compressing the amount of time for decision-making. Some military planners want to use machine learning to further improve the navigation and survivability of these missiles, rendering any future defense against them even more precarious. 

Other kinds of AI might upend nuclear stability by making more plausible a first strike that thwarts retaliation. Military planners fear that machine learning and related data collection technologies could find their hidden nuclear forces more easily. For example, better machine learning–driven analysis of overhead imagery could spot mobile missile units; the United States reportedly has developed a highly classified program to use AI to track North Korean launchers. Similarly, autonomous drones under the sea might detect enemy nuclear submarines, enabling them to be neutralized before they can retaliate for an attack. More advanced cyber operations might tamper with nuclear command and control systems or fool early warning mechanisms, causing confusion in the enemy’s networks and further inhibiting a response. Such fears of what AI can do make nuclear strategy harder and riskier. 

For some, just like the Cold War strategists who deployed the expert systems in SAGE and the Dead Hand, the answer to these new fears is more automation. The commander of Russia’s Strategic Rocket Forces has said that the original Dead Hand has been improved upon and is still functioning, though he didn’t offer technical details. In the United States, some proposals call for the development of a new Dead Hand–esque system to ensure that any first strike is met with nuclear reprisal, with the goal of deterring such a strike. It is a prospect that has strategic appeal to some warriors but raises grave concern for Cassandras, who warn of the present frailties of machine learning decision-making, and for evangelists, who do not want AI mixed up in nuclear brinkmanship.

While the evangelists’ concerns are more abstract, the Cassandras have concrete reasons for worry. Their doubts are grounded in stories like Petrov’s, in which systems were imbued with far too much trust and only a human who chose to disobey orders saved the day. The technical failures described in chapter 4 also feed their doubts. The operational risks of deploying fallible machine learning into complex environments like nuclear strategy are vast, and the successes of machine learning in other contexts do not always apply. Just because neural networks excel at playing Go or generating seemingly authentic videos or even determining how proteins fold does not mean that they are any more suited than Petrov’s Cold War–era computer for reliably detecting nuclear strikes. In the realm of nuclear strategy, misplaced trust of machines might be deadly for civilization; it is an obvious example of how the new fire’s force could quickly burn out of control. 

Of particular concern is the challenge of balancing between false negatives and false positives—between failing to alert when an attack is under way and falsely sounding the alarm when it is not. The two kinds of failure are in tension with each other. Some analysts contend that American military planners, operating from a place of relative security, worry more about the latter. In contrast, they argue that Chinese planners are more concerned about the limits of their early warning systems, given that China possesses a nuclear arsenal that lacks the speed, quantity, and precision of American weapons. As a result, Chinese government leaders worry chiefly about being too slow to detect an attack in progress. If these leaders decided to deploy AI to avoid false negatives, they might increase the risk of false positives, with devastating nuclear consequences. 
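The trade-off can be made concrete with a toy example. The scores, events, and thresholds below are invented purely for illustration; real early warning systems are vastly more complex and their parameters are not public.

```python
# Toy illustration of the false-negative / false-positive tension.
# All numbers are invented; nothing here models a real warning system.

def classify(sensor_score: float, threshold: float) -> bool:
    """Return True if the system would declare 'attack under way'."""
    return sensor_score >= threshold

# Hypothetical sensor scores: benign events (sun glare, clouds, flocks of
# birds) and genuine launches overlap, so no threshold separates them cleanly.
benign_events = [0.2, 0.35, 0.5, 0.55, 0.6]
real_launches = [0.45, 0.65, 0.8, 0.9]

for threshold in (0.4, 0.6):
    false_alarms = sum(classify(s, threshold) for s in benign_events)
    missed_launches = sum(not classify(s, threshold) for s in real_launches)
    print(f"threshold={threshold}: "
          f"{false_alarms} false alarms, {missed_launches} missed launches")
```

In this invented data, the lower threshold misses no launches but raises three false alarms, while the higher threshold quiets the alarms and lets one launch slip through. That is the trade the analysts describe American and Chinese planners weighing differently, with AI shifting where each side sets the dial.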

The strategic risks brought on by AI’s new role in nuclear strategy are even more worrying. The multifaceted nature of AI blurs lines between conventional deterrence and nuclear deterrence and warps the established consensus for maintaining stability. For example, the machine learning–enabled battle networks that warriors hope might manage conventional warfare might also manage nuclear command and control. In such a situation, a nation may attack another nation’s information systems with the hope of degrading its conventional capacity and inadvertently weaken its nuclear deterrent, causing unintended instability and fear and creating incentives for the victim to retaliate with nuclear weapons. This entanglement of conventional and nuclear command-and-control systems, as well as the sensor networks that feed them, increases the risks of escalation. AI-enabled systems may likewise falsely interpret an attack on command-and-control infrastructure as a prelude to a nuclear strike. Indeed, there is already evidence that autonomous systems perceive escalation dynamics differently from human operators. 

Another concern, almost philosophical in its nature, is that nuclear war could become even more abstract than it already is, and hence more palatable. The concern is best illustrated by an idea from Roger Fisher, a World War II pilot turned arms control advocate and negotiations expert. During the Cold War, Fisher proposed that nuclear codes be stored in a capsule surgically embedded near the heart of a military officer who would always be near the president. The officer would also carry a large butcher knife. To launch a nuclear war, the president would have to use the knife to personally kill the officer and retrieve the capsule—a comparatively small but symbolic act of violence that would make the tens of millions of deaths to come more visceral and real. 

Fisher’s Pentagon friends objected to his proposal, with one saying, “My God, that’s terrible. Having to kill someone would distort the president’s judgment. He might never push the button.” This revulsion, of course, was what Fisher wanted: that, in the moment of greatest urgency and fear, humanity would have one more chance to experience—at an emotional, even irrational, level—what was about to happen, and one more chance to turn back from the brink. 

Just as Petrov’s independence prompted him to choose a different course, Fisher’s proposed symbolic killing of an innocent was meant to force one final reconsideration. Automating nuclear command and control would do the opposite, reducing everything to error-prone, stone-cold machine calculation. If the capsule with nuclear codes were embedded near the officer’s heart, if the neural network decided the moment was right, and if it could do so, it would—without hesitation and without understanding—plunge in the knife.

Google says it thwarted North Korean cyberattacks in early 2022

Google’s Threat Analysis Group announced on Thursday that, in February, it had discovered a pair of North Korean hacking cadres, running campaigns dubbed Operation Dream Job and Operation AppleJeus, that were leveraging a remote code execution exploit in the Chrome web browser. 

The blackhatters reportedly targeted the US news media, IT, crypto and fintech industries, with evidence of their attacks going back as far as January 4th, 2022, though the Threat Analysis Group notes that organizations outside the US could have been targets as well.

“We suspect that these groups work for the same entity with a shared supply chain, hence the use of the same exploit kit, but each operate with a different mission set and deploy different techniques,” the Google team wrote on Thursday. “It is possible that other North Korean government-backed attackers have access to the same exploit kit.”

Operation Dream Job targeted 250 people across 10 companies with fraudulent job offers from the likes of Disney and Oracle sent from accounts spoofed to look like they came from Indeed or ZipRecruiter. Clicking on the link would launch a hidden iframe that would trigger the exploit. 

Operation AppleJeus, on the other hand, targeted more than 85 users in the cryptocurrency and fintech industries using the same exploit kit. That effort involved “compromising at least two legitimate fintech company websites and hosting hidden iframes to serve the exploit kit to visitors,” Google’s security researchers found. “In other cases, we observed fake websites — already set up to distribute trojanized cryptocurrency applications — hosting iframes and pointing their visitors to the exploit kit.”

“The kit initially serves some heavily obfuscated javascript used to fingerprint the target system,” the team said. “This script collected all available client information such as the user-agent, resolution, etc. and then sent it back to the exploitation server. If a set of unknown requirements were met, the client would be served a Chrome RCE exploit and some additional javascript. If the RCE was successful, the javascript would request the next stage referenced within the script as ‘SBX,’ a common acronym for Sandbox Escape.”

The Google security group discovered the activity on February 10th and had patched it by February 14th. The company has added all identified websites and domains to its Safe Browsing database and notified all of the targeted Gmail and Workspace users about the attempts.