Washington State legislation guaranteeing pay and benefits for ride-hail workers has become a practical reality. Reuters reports Governor Jay Inslee has signed into law a measure setting minimum pay guarantees of $1.17 per mile and 34 cents per minute, with a minimum of $3 per trip. Drivers at Uber, Lyft and other companies will also receive benefits like paid sick leave, access to workers’ compensation and family medical leave. They can also appeal if they believe they’ve been unfairly terminated.
The law has garnered support from both labor organizers and companies. The Washington Drivers Union billed it as an “unprecedented victory” that would reverse years of shrinking pay and improve the overall quality of life. Uber said in a statement that the law “decisively” gave drivers the mix of independence and safeguards they were asking for, while Lyft said this was a “win” that emerged when unions, politicians and companies “worked together.”
There are concerns the law strips power away, however. It declares that drivers for ride-hailing apps aren’t employees, potentially limiting access to further benefits and more consistent hours. The law also bars cities and counties from applying additional regulations beyond those in effect. Seattle will still offer higher pay ($1.38 per mile, 59 cents per minute and at least $5.17 per trip), but companies like Uber and Lyft have effectively limited the scope of regulations they might face.
This is still the first state-level law to set pay standards for gig-based rides, though. Until now, only New York City and Seattle had established minimums in the country. This could make ride-hail work viable for considerably more people, and might prompt other states to enact their own guarantees.
The initial results of a second union election at Amazon’s BHM1 warehouse in Bessemer, Alabama have finally come through. Workers have voted against unionization in a closely contested 993-875 vote (with 59 voided votes) out of 6,153 workers eligible to cast a ballot. Turnout appears to have been considerably lower this time around, as more than 3,000 employees cast ballots in the early 2021 vote. However, 416 votes have been challenged — more than enough to change the outcome — so the definitive result might not be available for some time.
While it’s not currently known how many of the challenges came from each party, in a post-tally press conference, Retail, Wholesale and Department Store Union president Stuart Appelbaum said “each side challenged over 100 ballots.” The NLRB has not yet scheduled the hearing to determine which of these ballots should be opened and counted, but expects it to take place in the coming weeks. Any additional unfair labor practice charges the RWDSU wishes to file regarding this re-run election must be submitted within the next five business days.
The tally brings BHM1 to the possible end of a long and messy saga. Bessemer workers voted against unionization in early 2021, but the National Labor Relations Board ruled that Amazon violated labor laws by allegedly interfering with the vote. RWDSU accused Amazon of repeatedly trying to intimidate workers through measures like an unauthorized ballot box and anti-union campaign material. While Amazon disputed the claims, the NLRB ultimately ordered a second vote.
The rerun election didn’t go smoothly, either. The RWDSU has maintained that Amazon interfered with the second vote by removing pro-union posters, forcing attendance of anti-union meetings and limiting time spent on company grounds to discourage organization. Before the vote, the RWDSU also accused Amazon of illegal retaliation against worker Isaiah Thomas’ pro-union efforts. The company has again argued that its actions are legal.
BHM1 was the first major Amazon facility in the US to hold a union vote, but it’s no longer the only one. One Staten Island warehouse, JFK8, is already voting on possible unionization, and early vote totals show the grassroots Amazon Labor Union ahead by several hundred votes. Another facility in Staten Island is scheduled to hold its own unionization vote starting in late April. Simply put, there’s a growing desire among workers to have a say in their conditions at Amazon; whether those efforts succeed, however, remains to be seen.
Additional reporting by Bryan Menegus. Updated with information from RWDSU and NLRB
The American Lung Association has released a report detailing the public health benefits of a complete national shift to zero-emission vehicles from 2020 to 2050. Apparently, if all new passenger and heavy-duty vehicles sold by 2035 and 2040, respectively, are zero-emission models, 110,000 deaths could be avoided in the United States over the next 30 years. That figure came from the association’s analysis, which also projects that the Biden administration will achieve its target of having 100 percent carbon pollution-free electricity by 2035.
With no air pollution affecting people’s health, up to 2.79 million asthma attacks could also be avoided. And perhaps to convince companies to get on board with the transition, the association also made a point of mentioning that up to 13.4 million lost workdays could be avoided with cleaner air.
Harold Wimmer, National President and CEO of the American Lung Association, said in a statement:
“Zero-emission transportation is a win-win for public health. Too many communities across the U.S. deal with high levels of dangerous pollution from nearby highways and trucking corridors, ports, warehouses and other pollution hot spots. Plus, the transportation sector is the nation’s biggest source of carbon pollution that drives climate change and associated public health harms. This is an urgent health issue for millions of people in the U.S.”
The widespread transition to zero-emission vehicles would generate up to $1.2 trillion in public health benefits, the report noted, and $1.7 trillion in climate benefits. Communities and counties with the highest percentage of lower-income families and People of Color in the US would benefit greatly from the shift, since they contain areas with highly concentrated pollution from diesel hotspots, power plants and other fossil fuel facilities. The top metro areas that would benefit most include Los Angeles, New York, Chicago, San Jose, Washington, Miami, Houston, Detroit and Dallas-Fort Worth.
To ensure that all new vehicles sold by 2040 are zero-emission and that the grid can supply the country with pollution-free electricity within 15 years, the association has listed a series of recommendations. They include a call for increased funding for non-combustion electricity generation and transportation, extending and expanding incentives for zero-emission vehicle purchases and “converting public fleets to zero-emission vehicles immediately.” The association is also urging Congress to pass legislation that would accelerate the transition and the EPA to adopt standards requiring lower carbon emissions from vehicles before the shift is complete.
Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no strangers to nuclear brinkmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie’s latest book, we can see how close humanity came to an atomic holocaust in 1983 and why an increasing reliance on automation — on both sides of the Iron Curtain — only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of “data, algorithms, and computing power”) are transforming how nations wage war both domestically and abroad.
As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began. At least, that was what the alarms said at the bunker in Moscow where Lieutenant Colonel Stanislav Petrov was on duty.
Inside the bunker, sirens blared and a screen flashed the word “launch.” A missile was inbound. Petrov, unsure if it was an error, did not respond immediately. Then the system reported two more missiles, and then two more after that. The screen now said “missile strike.” The computer reported with its highest level of confidence that a nuclear attack was underway.
The technology had done its part, and everything was now in Petrov’s hands. To report such an attack meant the beginning of nuclear war, as the Soviet Union would surely launch its own missiles in retaliation. To not report such an attack was to impede the Soviet response, surrendering the precious few minutes the country’s leadership had to react before atomic mushroom clouds burst out across the country; “every second of procrastination took away valuable time,” Petrov later said.
“For 15 seconds, we were in a state of shock,” he recounted. He felt like he was sitting on a hot frying pan. After quickly gathering as much information as he could from other stations, he estimated there was a 50-percent chance that an attack was under way. Soviet military protocol dictated that he base his decision on the computer readouts in front of him, the ones that said an attack was undeniable. After careful deliberation, Petrov called the duty officer to break the news: the early warning system was malfunctioning. There was no attack, he said. It was a roll of the atomic dice.
Twenty-three minutes after the alarms—the time it would have taken a missile to hit Moscow—he knew that he was right and the computers were wrong. “It was such a relief,” he said later. After-action reports revealed that the sun’s glare off a passing cloud had confused the satellite warning system. Thanks to Petrov’s decisions to disregard the machine and disobey protocol, humanity lived another day.
Petrov’s actions took extraordinary judgment and courage, and it was only by sheer luck that he was the one making the decisions that night. Most of his colleagues, Petrov believed, would have begun a war. He was the only one among the officers at that duty station who had a civilian, rather than military, education and who was prepared to show more independence. “My colleagues were all professional soldiers; they were taught to give and obey orders,” he said. The human in the loop — this particular human — had made all the difference.
Petrov’s story reveals three themes: the perceived need for speed in nuclear command and control to buy time for decision makers; the allure of automation as a means of achieving that speed; and the dangerous propensity of those automated systems to fail. These three themes have been at the core of managing the fear of a nuclear attack for decades and present new risks today as nuclear and non-nuclear command, control, and communications systems become entangled with one another.
Perhaps nothing shows the perceived need for speed and the allure of automation as much as the fact that, within two years of Petrov’s actions, the Soviets deployed a new system to increase the role of machines in nuclear brinkmanship. It was properly known as Perimeter, but most people just called it the Dead Hand, a sign of the system’s diminished role for humans. As one former Soviet colonel and veteran of the Strategic Rocket Forces put it, “The Perimeter system is very, very nice. We remove unique responsibility from high politicians and the military.” The Soviets wanted the system to partly assuage their fears of nuclear attack by ensuring that, even if a surprise strike succeeded in decapitating the country’s leadership, the Dead Hand would make sure it did not go unpunished.
The idea was simple, if harrowing: in a crisis, the Dead Hand would monitor the environment for signs that a nuclear attack had taken place, such as seismic rumbles and radiation bursts. Programmed with a series of if-then commands, the system would run through the list of indicators, looking for evidence of the apocalypse. If signs pointed to yes, the system would test the communications channels with the Soviet General Staff. If those links were active, the system would remain dormant. If the system received no word from the General Staff, it would circumvent ordinary procedures for ordering an attack. The decision to launch would then rest in the hands of a lowly bunker officer, someone many ranks below a senior commander like Petrov, who would nonetheless find himself responsible for deciding if it was doomsday.
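The if-then chain described above can be sketched in a few lines of code. This is purely an illustration of the logic as the excerpt characterizes it — the actual Perimeter implementation has never been made public, and the inputs and function names here are invented:

```python
# Illustrative sketch only: models the if-then decision chain attributed to
# the Dead Hand in the text above. All names and inputs are hypothetical;
# the real system's design is not public.

def dead_hand_decision(seismic_rumble: bool,
                       radiation_burst: bool,
                       general_staff_responds: bool) -> str:
    """Return the system's action given simplified environmental inputs."""
    attack_indicators = seismic_rumble and radiation_burst
    if not attack_indicators:
        # No evidence of an attack: do nothing.
        return "remain dormant"
    if general_staff_responds:
        # Leadership is alive; the normal chain of command holds.
        return "remain dormant"
    # Apparent attack and a silent General Staff: bypass ordinary procedure.
    return "delegate launch authority to bunker officer"

print(dead_hand_decision(True, True, False))
```

The unsettling point the excerpt makes is visible even in this toy version: the final branch is reached automatically, with no human judgment anywhere above the bunker officer.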
The United States was also drawn to automated systems. Since the 1950s, its government had maintained a network of computers to fuse incoming data streams from radar sites. This vast network, called the Semi-Automatic Ground Environment, or SAGE, was not as automated as the Dead Hand in launching retaliatory strikes, but its creation was rooted in a similar fear. Defense planners designed SAGE to gather radar information about a potential Soviet air attack and relay that information to the North American Aerospace Defense Command, which would intercept the invading planes. The cost of SAGE was more than double that of the Manhattan Project, or almost $100 billion in 2022 dollars. Each of the twenty SAGE facilities boasted two 250-ton computers, which each measured 7,500 square feet and were among the most advanced machines of the era.
If nuclear war is like a game of chicken — two nations daring each other to turn away, like two drivers barreling toward a head-on collision — automation offers the prospect of a dangerous but effective strategy. As the nuclear theorist Herman Kahn described:
The “skillful” player may get into the car quite drunk, throwing whisky bottles out the window to make it clear to everybody just how drunk he is. He wears very dark glasses so that it is obvious that he cannot see much, if anything. As soon as the car reaches high speed, he takes the steering wheel and throws it out the window. If his opponent is watching, he has won. If his opponent is not watching, he has a problem; likewise, if both players try this strategy.
To automate nuclear reprisal is to play chicken without brakes or a steering wheel. It tells the world that no nuclear attack will go unpunished, but it greatly increases the risk of catastrophic accidents.
Automation helped enable the dangerous but seemingly predictable world of mutually assured destruction. Neither the United States nor the Soviet Union was able to launch a disarming first strike against the other; it would have been impossible for one side to fire its nuclear weapons without alerting the other side and providing at least some time to react. Even if a surprise strike were possible, it would have been impractical to amass a large enough arsenal of nuclear weapons to fully disarm the adversary by firing multiple warheads at each enemy silo, submarine, and bomber capable of launching a counterattack. Hardest of all was knowing where to fire. Submarines in the ocean, mobile ground-launched systems on land, and round-the-clock combat air patrols in the skies made the prospect of successfully executing such a first strike deeply unrealistic. Automated command and control helped ensure these units would receive orders to strike back. Retaliation was inevitable, and that made tenuous stability possible.
Modern technology threatens to upend mutually assured destruction. When an advanced missile called a hypersonic glide vehicle nears space, for example, it separates from its booster rockets and accelerates down toward its target at five times the speed of sound. Unlike a traditional ballistic missile, the vehicle can radically alter its flight profile over long ranges, evading missile defenses. In addition, its low-altitude approach renders ground-based sensors ineffective, further compressing the amount of time for decision-making. Some military planners want to use machine learning to further improve the navigation and survivability of these missiles, rendering any future defense against them even more precarious.
Other kinds of AI might upend nuclear stability by making more plausible a first strike that thwarts retaliation. Military planners fear that machine learning and related data collection technologies could find their hidden nuclear forces more easily. For example, better machine learning–driven analysis of overhead imagery could spot mobile missile units; the United States reportedly has developed a highly classified program to use AI to track North Korean launchers. Similarly, autonomous drones under the sea might detect enemy nuclear submarines, enabling them to be neutralized before they can retaliate for an attack. More advanced cyber operations might tamper with nuclear command and control systems or fool early warning mechanisms, causing confusion in the enemy’s networks and further inhibiting a response. Such fears of what AI can do make nuclear strategy harder and riskier.
For some, just like the Cold War strategists who deployed the expert systems in SAGE and the Dead Hand, the answer to these new fears is more automation. The commander of Russia’s Strategic Rocket Forces has said that the original Dead Hand has been improved upon and is still functioning, though he didn’t offer technical details. In the United States, some proposals call for the development of a new Dead Hand–esque system to ensure that any first strike is met with nuclear reprisal, with the goal of deterring such a strike. It is a prospect that has strategic appeal to some warriors but raises grave concern for Cassandras, who warn of the present frailties of machine learning decision-making, and for evangelists, who do not want AI mixed up in nuclear brinkmanship.
While the evangelists’ concerns are more abstract, the Cassandras have concrete reasons for worry. Their doubts are grounded in stories like Petrov’s, in which systems were imbued with far too much trust and only a human who chose to disobey orders saved the day. The technical failures described in chapter 4 also feed their doubts. The operational risks of deploying fallible machine learning into complex environments like nuclear strategy are vast, and the successes of machine learning in other contexts do not always apply. Just because neural networks excel at playing Go or generating seemingly authentic videos or even determining how proteins fold does not mean that they are any more suited than Petrov’s Cold War–era computer for reliably detecting nuclear strikes. In the realm of nuclear strategy, misplaced trust of machines might be deadly for civilization; it is an obvious example of how the new fire’s force could quickly burn out of control.
Of particular concern is the challenge of balancing between false negatives and false positives — between failing to alert when an attack is under way and falsely sounding the alarm when it is not. The two kinds of failure are in tension with each other. Some analysts contend that American military planners, operating from a place of relative security, worry more about the latter. In contrast, they argue that Chinese planners are more concerned about the limits of their early warning systems, given that China possesses a nuclear arsenal that lacks the speed, quantity, and precision of American weapons. As a result, Chinese government leaders worry chiefly about being too slow to detect an attack in progress. If these leaders decided to deploy AI to avoid false negatives, they might increase the risk of false positives, with devastating nuclear consequences.
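The tension between the two failure modes can be made concrete with a toy threshold detector. The sensor scores below are entirely invented for illustration; the point is only that a single alert threshold cannot minimize both error rates at once:

```python
# Toy illustration of the false-negative / false-positive tradeoff in a
# threshold-based warning system. All numbers are invented for illustration.

def alarm_rates(threshold, attack_signals, benign_signals):
    """Fraction of real attacks missed, and fraction of benign events flagged."""
    false_neg = sum(s < threshold for s in attack_signals) / len(attack_signals)
    false_pos = sum(s >= threshold for s in benign_signals) / len(benign_signals)
    return false_neg, false_pos

attacks = [0.7, 0.8, 0.9, 0.95]   # sensor scores during real attacks (invented)
benign  = [0.1, 0.3, 0.5, 0.75]   # scores from sun glare, clouds, etc. (invented)

for threshold in (0.6, 0.85):
    fn, fp = alarm_rates(threshold, attacks, benign)
    print(f"threshold={threshold}: missed attacks={fn:.2f}, false alarms={fp:.2f}")
```

Lowering the threshold catches every attack in this toy data but starts flagging benign events like sun glare; raising it silences the false alarms at the cost of missed attacks. That is exactly the dilemma the paragraph above attributes to American and Chinese planners, who sit on opposite sides of it.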
The strategic risks brought on by AI’s new role in nuclear strategy are even more worrying. The multifaceted nature of AI blurs lines between conventional deterrence and nuclear deterrence and warps the established consensus for maintaining stability. For example, the machine learning–enabled battle networks that warriors hope might manage conventional warfare might also manage nuclear command and control. In such a situation, a nation may attack another nation’s information systems with the hope of degrading its conventional capacity and inadvertently weaken its nuclear deterrent, causing unintended instability and fear and creating incentives for the victim to retaliate with nuclear weapons. This entanglement of conventional and nuclear command-and-control systems, as well as the sensor networks that feed them, increases the risks of escalation. AI-enabled systems may likewise falsely interpret an attack on command-and-control infrastructure as a prelude to a nuclear strike. Indeed, there is already evidence that autonomous systems perceive escalation dynamics differently from human operators.
Another concern, almost philosophical in its nature, is that nuclear war could become even more abstract than it already is, and hence more palatable. The concern is best illustrated by an idea from Roger Fisher, a World War II pilot turned arms control advocate and negotiations expert. During the Cold War, Fisher proposed that nuclear codes be stored in a capsule surgically embedded near the heart of a military officer who would always be near the president. The officer would also carry a large butcher knife. To launch a nuclear war, the president would have to use the knife to personally kill the officer and retrieve the capsule—a comparatively small but symbolic act of violence that would make the tens of millions of deaths to come more visceral and real.
Fisher’s Pentagon friends objected to his proposal, with one saying, “My God, that’s terrible. Having to kill someone would distort the president’s judgment. He might never push the button.” This revulsion, of course, was what Fisher wanted: that, in the moment of greatest urgency and fear, humanity would have one more chance to experience — at an emotional, even irrational, level — what was about to happen, and one more chance to turn back from the brink.
Just as Petrov’s independence prompted him to choose a different course, Fisher’s proposed symbolic killing of an innocent was meant to force one final reconsideration. Automating nuclear command and control would do the opposite, reducing everything to error-prone, stone-cold machine calculation. If the capsule with nuclear codes were embedded near the officer’s heart, if the neural network decided the moment was right, and if it could do so, it would — without hesitation and without understanding — plunge in the knife.
On Friday, the Federal Communications Commission added Russia’s Kaspersky Lab to its “Covered List,” labeling the cybersecurity firm an “unacceptable” national security risk to the US. The move marks the first time the agency has blacklisted a Russian …
The US and the European Union have struck a preliminary agreement on an updated Privacy Shield framework to re-enable the flow of data between the two regions. A previous agreement was struck down by the EU’s top court in 2020 over concerns that Europeans would not be fully protected from mass surveillance by the US.
“We have found an agreement in principle on a new framework for transatlantic data flows,” European Commission President Ursula von der Leyen said at a joint press conference with US President Joe Biden. “This will enable predictable and trustworthy data flows between the EU and US, safeguarding privacy and civil liberties.”
“Privacy and security are key elements of my digital agenda,” Biden said. “And, today, we’ve agreed to unprecedented protections for data privacy and security for our citizens. This new arrangement will enhance the Privacy Shield framework, promote growth and innovation in Europe and the United States and help companies, both small and large, compete in the digital economy.”
Biden added that should the new deal come into force, it will “allow the European Commission to once again authorize transatlantic data flows that help facilitate $7.1 trillion in economic relationships with the EU.” He said the US and EU reached other agreements on bolstering renewable sources of energy and reducing Europe’s reliance on fossil fuels from Russia.
The provisional deal on data privacy comes one day after the European Union reached an agreement on adopting the Digital Markets Act (DMA), legislation aimed at reining in the power of the biggest tech companies and giving smaller players more of a chance to compete. One provision could force the likes of Meta and Apple to make their messaging services interoperable with other platforms.
At a separate press conference on Friday, Margrethe Vestager, the European Commission’s executive vice president for A Europe Fit for the Digital Age, said the DMA will come into force in October.
The US Justice Department today announced indictments against four Russian government employees it alleges waged a six-year hacking campaign against the global energy sector, targeting devices in roughly 135 countries. The two indictments were filed under seal last summer and are only now being disclosed to the public.
The DOJ’s decision to release the documents may be a way to raise public awareness of the increased threat these kinds of hacks pose to US critical infrastructure in the wake of Russia’s invasion of Ukraine. State-sponsored hackers have targeted energy, nuclear, water and critical manufacturing companies for years, aiming to steal information on their control systems. Cybersecurity officials noticed a spike in Russian hacking activity in the US in recent weeks.
“Russian state-sponsored hackers pose a serious and persistent threat to critical infrastructure both in the United States and around the world,” said Deputy Attorney General Lisa O. Monaco in a statement. “Although the criminal charges unsealed today reflect past activity, they make crystal clear the urgent ongoing need for American businesses to harden their defenses and remain vigilant.
The indictments allege two separate campaigns between 2012 and 2018. The first, filed in June 2021, involves Evgeny Viktorovich Gladkikh, a computer programmer at the Russian Ministry of Defense. It alleges that Gladkikh and a team of co-conspirators were members of the Triton malware hacking group, which launched a failed attack on a Saudi petrochemical plant in 2017. As TechCrunch noted, the attack could have destroyed the plant if not for a bug in the code. In 2018, the same group attempted to hack US power plants but failed.
The second indictment charges three hackers who work for Russia’s intelligence agency, the Federal Security Service (FSB), as members of the hacking group Dragonfly, which coordinated multiple attacks on nuclear power plants, energy companies and other critical infrastructure. It alleges that the three men, Pavel Aleksandrovich Akulov, Mikhail Mikhailovich Gavrilov and Marat Valeryevich Tyukov, engaged in multiple computer intrusions between 2012 and 2017. The DOJ estimates that the three hackers were able to install malware on more than 17,000 unique devices in the US and abroad.
A second phase known as Dragonfly 2.0, which occurred between 2014 and 2017, targeted more than 3,300 users across 500 different energy companies in the US and abroad. According to the DOJ, the conspirators were looking to access the software and hardware in power plants that would allow the Russian government to trigger a shutdown.
The US government is still looking for the three FSB hackers. The State Department today announced a $10 million reward for information on their whereabouts. However, as the Washington Post notes, the US and Russia do not have an extradition treaty, so the likelihood of any of the alleged hackers being brought to trial as a result of these indictments is slim.
Despite the pandemic shuttering offices and upending commutes across the nation for more than two years, America’s roads and bridges remain critical to its economic and social well-being, acting as a circulatory system for goods and people. But like the ticker found in your average American, our transportation system could stand more routine checkups and maybe a few repavings if it wants to still be around in another four decades. The guy whose job it is to make sure that happens, US Secretary of Transportation Pete Buttigieg, took to the SXSW stage at the Austin Convention Center last week to discuss the challenges his administration faces.
The Secretary’s hour-long town hall presentation touched on a wide range of subjects, beginning with the projects his agency plans to focus on thanks to the recent passage of a $1.2 trillion infrastructure package, roughly half of which is earmarked for transportation programs. “There are five things that we’re really focused on,” Secretary Buttigieg said. “Safety, economic development, climate, equity and transformation.
“It’s the reason the department exists,” he continued. “We have a Department of Transportation, first and foremost, to make sure everybody can get to where they need to go safely.”
But despite his agency’s efforts, the Secretary noted that some 38,000 Americans died on the road last year, compared to air travel where, “it’s not unusual to have a year where there are zero deaths in commercial aviation in the United States… I don’t believe it has to be that way.”
These investments will also help position the country to better compete economically. He pointed to China, which has invested extensively in its infrastructure for decades, “because of how important it is for their economic future,” he said. “This is what countries do. This is what the United States, historically, has done except we sort of skipped about 40 years.”
We need look no further than the collapse of Pittsburgh’s Forbes Avenue bridge in January to see the impact of nearly half a century of investment austerity on the nation’s roadways. The elevated span fell just hours before President Biden was scheduled to speak in the city (promoting his infrastructure plan, no less), sending ten people to the hospital with non-life-threatening injuries and highlighting Pennsylvania’s ongoing struggles to ensure the proper upkeep of its nearly 500 bridges.
Ensuring the safe operation of transportation also promotes economic development, Buttigieg argued, “so we’re going to make sure that we drive economic opportunity through great transportation, both in the installation of electric chargers and the laying of track.”
Tempering the capitalist urges that a functional transportation network seems to rouse are the agency’s climate goals. “Every transportation decision is a climate decision, whether we recognize it or not,” Buttigieg said, noting that the transportation sector is the US economy’s second leading source of greenhouse gas emissions, behind the energy sector. “Not only do we have to cut emissions from transportation on our roads by making it so that you don’t have to drag two tons of metal along to get to where you need to go all the time, we’ve got to prepare for the climate impacts that are already happening.”
Secretary Buttigieg also touched on how to most equitably distribute the benefits from those mitigation efforts and the incoming investment funds. “Infrastructure can and should connect, but sometimes it divides,” Buttigieg said, referencing the nation’s historical red-lining practices and “urban renewal” projects that tore apart Black communities for generations.
“We have a responsibility to make sure that doesn’t happen this time around, and to make sure that the jobs that are going to be created, are available to everybody,” he continued. “Including fields that have been traditionally very male, or very white, but could be open to everybody. A lot of great pathways in the middle class, through these kinds of construction and infrastructure jobs that are being created.”
Looking ahead, "I will say that I think the 2020s will probably be one of the most transformative periods we've ever seen in transportation," Buttigieg told the SXSW audience, nodding to recent advances in EVs, automation, UAVs and private space flight. "These things are happening, they're upon us, and we have an opportunity to prepare the way to make sure that the development of these innovations benefits us in terms of public policy goals."
For all the Transportation Secretary's excitement at these future prospects, he had no misconceptions about how long it will likely take to achieve them. "I get a lot of interviews where the first question is, 'all right, what are we going to see this summer,'" he said. "I will say, you will see more construction starting to happen as early as this summer in some places as a result of this bill."
This is not a 2009 economic stimulus-style plan where “the idea was to get as much money pumped into our economy as possible to stimulate demand and deal with high unemployment,” he said. “This is a very different economic reality right now. And there’s a very different purpose behind this bill. It’s not about short-term stimulus. This is about getting ready for the long term.”
A bipartisan group of legislators today introduced bills in the House and Senate that would expand transparency requirements when it comes to government surveillance of US citizens, adding email, text, location and cloud data to the existing reporting framework. Currently, the US government is required to alert Americans who have been targeted by wiretaps and bank record subpoenas, but this doesn’t apply to digital or cloud data. The Government Surveillance Transparency Act aims to adjust the parameters of this rule, expanding it to cover more common, modern forms of digital communication and data storage.
The Senate bill is sponsored by Oregon Democrat Ron Wyden, Montana Republican Steve Daines, New Jersey Democrat Cory Booker and Utah Republican Mike Lee, while a companion bill in the House of Representatives is backed by California Democrat Ted Lieu and Ohio Republican Warren Davidson. They argue that hundreds of thousands of criminal surveillance orders from US authorities go unreported each year, keeping Americans in the dark about the broad scope of government monitoring programs.
The bill also addresses the government’s use of gag orders to halt technology companies from informing their customers of surveillance campaigns. While many tech companies have tried voluntarily reporting government subpoenas and data requests to their customers, authorities have used gag orders to keep these campaigns secret, according to the legislators.
“When the government obtains someone’s emails or other digital information, users have a right to know,” Wyden said in a press release. “Our bill ensures that no investigation will be compromised, but makes sure the government can’t hide surveillance forever by misusing sealing and gag orders to prevent the American people from understanding the enormous scale of government surveillance, as well as ensuring that the targets eventually learn their personal information has been searched.”
Alongside reforms to notification requirements and the gag-order process, the legislation would force authorities to publish online general information about every surveillance order they complete. It would also require law enforcement to notify the courts if they search the wrong person, house or device in the course of an investigation, or if a company shares unauthorized information.
Public companies would be required to disclose the greenhouse gas emissions they produce under new rules proposed by the US Securities and Exchange Commission. The move is part of the Biden administration's push to identify climate risks and cut emissions by as much as 52 percent by 2030. The SEC's three Democratic commissioners voted to approve the proposal, while Republican commissioner Hester M. Peirce voted against it.
“I am pleased to support today’s proposal because, if adopted, it would provide investors with consistent, comparable, and decision-useful information for making their investment decisions, and it would provide consistent and clear reporting obligations for issuers,” said SEC Chair Gary Gensler.
Under the new rule, companies would need to explain how climate risks would affect their operations and strategies. They'd be required to share the emissions they generate, and larger companies would need to have those numbers confirmed by independent consulting firms. They'd also need to disclose indirect emissions generated by suppliers and customers if those are "material" to their climate goals.
The SEC proposed rule changes that would require registrants to include certain climate-related disclosures in their registration statements and periodic reports.
— U.S. Securities and Exchange Commission (@SECGov) March 21, 2022
In addition, any companies that have made public promises to reduce their carbon footprint would need to explain how they plan to meet those goals. That includes the use of carbon offsets like planting trees, which have been criticized as being a poor substitute for actually slashing emissions, as Greenpeace said in a recent report.
The SEC already allows for voluntary emissions guidance, but the new rules would make it mandatory. Many companies, like Ford, already share emissions data from factory production as well as vehicle fuel usage. However, "there are lots of companies that won't do it unless it's mandatory," task force chief Mary Schapiro told The Washington Post ahead of the report's release.
After the proposed rule is published on the SEC's website, the public will have 60 days to comment. The final rule will likely head to a vote in several months and would be phased in over several years. The rule will likely be challenged in court by Republicans in states like West Virginia, along with business groups, on the grounds that climate change is not a material issue for investors in the near future.
However, experts have warned that time is of the essence. The Intergovernmental Panel on Climate Change (IPCC) recently issued a report stating that many of the impacts of global warming are “irreversible” and that there’s only a brief window of time to avoid the worst. UN Secretary General Antonio Guterres called it a “damning indictment of failed climate leadership.”