Cornell researchers taught a robot to take Airbnb photos

Aesthetics is what happens when our brains interact with content and go, “ooh pretty, give me more of that please.” Whether it’s a starry night or The Starry Night, the sound of a scenic seashore or the latest single from Megan Thee Stallion, understanding how the sensory experiences that scintillate us most deeply do so has spawned an entire branch of philosophy studying art, in all its forms, as well as how it is devised, produced and consumed. While what constitutes “good” art varies between people as much as what constitutes porn, the appreciation of life’s finer things is an intrinsically human endeavor (sorry, Suda) — or at least it was until we taught computers how to do it too.

The study of computational aesthetics seeks to quantify beauty as expressed in human creative endeavors, essentially using mathematical formulas and machine learning algorithms to appraise a specific piece against existing criteria, (hopefully) reaching an opinion equivalent to that of a human performing the same inspection. The field was founded in the early 1930s, when American mathematician George David Birkhoff devised his theory of aesthetics, M = O/C, where M is the aesthetic measure (think: a numerical score), O is order and C is complexity. Under this metric, simple, orderly pieces rank higher — i.e. are more aesthetically pleasing — than complex and chaotic scenes.
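To make the formula concrete, here's a minimal sketch in Python. Birkhoff had his own elaborate scoring rules for things like polygons and vases; the values below are invented purely for illustration and aren't his actual scores.

```python
# Birkhoff's aesthetic measure: M = O / C.
# How to quantify "order" (O) and "complexity" (C) is the hard part;
# the sample values below are illustrative stand-ins, not Birkhoff's
# actual scoring rules.

def aesthetic_measure(order: float, complexity: float) -> float:
    """Return Birkhoff's M = O / C; higher means more pleasing."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return order / complexity

# A simple, orderly composition outscores a busy, chaotic one:
print(aesthetic_measure(order=6, complexity=4))   # M = 1.5
print(aesthetic_measure(order=6, complexity=24))  # M = 0.25
```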

German philosopher Max Bense and French engineer Abraham Moles both, independently, formalized Birkhoff's initial work into a reliable scientific method for gauging aesthetics in the 1950s. By the '90s, the International Society for Mathematical and Computational Aesthetics had been founded, and over the past 30 years the field has evolved further, spreading into AI and computer graphics, with the ultimate goal of developing computational systems capable of judging art with the same objectivity and sensitivity as humans, if not superior sensibilities. As such, these computer vision systems have found use augmenting human appraisers' judgments, automating rote image analysis similar to what we're seeing in medical diagnostics, and grading video and photographs to help amateur shutterbugs improve their craft.

Recently, a team of researchers from Cornell University took a state-of-the-art computational aesthetic system one step further, enabling the AI not only to determine the most pleasing picture in a given dataset, but to capture new, original — and most importantly, good — shots on its own. They've dubbed it AutoPhoto, and their study was presented last fall at the International Conference on Intelligent Robots and Systems. This robo-photographer consists of three parts: the image evaluation algorithm, which scores a presented image for aesthetics; a Clearpath Jackal wheeled robot, upon which the camera is affixed; and the AutoPhoto algorithm itself, which serves as a sort of firmware, translating the results of the image grading process into drive commands for the physical robot, effectively automating the search for an optimal shot.

For its image evaluation algorithm, the Cornell team, led by second-year master's student Hadi AlZayer, leveraged an existing learned aesthetic estimation model that had been trained on a dataset of more than a million human-ranked photographs. AutoPhoto itself was trained virtually, on dozens of 3D interior room scenes, to spot the optimally composed angle before the team attached it to the Jackal.

When let loose in a building on campus, as you can see in the video above, the robot starts off with a slew of bad takes, but as the AutoPhoto algorithm gains its bearings, its shot selection steadily improves until the images rival those of local Zillow listings. On average, it takes about a dozen iterations to optimize each shot, and the whole process wraps up in just a few minutes.

“You can essentially take incremental improvements to the current commands,” AlZayer told Engadget. “You can do it one step at a time, meaning you can formulate it as a reinforcement learning problem.” This way, the algorithm doesn’t have to conform to traditional heuristics like the rule of thirds because it already knows what people will like as it was taught to match the look and feel of the shots it takes with the highest-ranked pictures from its training data, AlZayer explained.
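To sketch what that formulation looks like in practice: the description above suggests a loop that scores the current view, nudges the camera, and keeps whatever improves the score. The greedy hill-climb below is a simplification of the paper's actual reinforcement learning policy, and every name in it (aesthetic_score, robot.move, robot.capture) is hypothetical.

```python
# A simplified sketch of an AutoPhoto-style capture loop. The real
# system trains an RL policy that maps the current view to a camera
# move; this greedy version only illustrates the "one step at a time"
# improvement idea. All names here are hypothetical.

CANDIDATE_MOVES = ["forward", "back", "turn_left", "turn_right"]
UNDO = {"forward": "back", "back": "forward",
        "turn_left": "turn_right", "turn_right": "turn_left"}

def auto_photo(robot, aesthetic_score, max_steps=12):
    best_shot = robot.capture()
    best = aesthetic_score(best_shot)      # learned aesthetic model
    for _ in range(max_steps):             # ~a dozen iterations per shot
        improved = False
        for move in CANDIDATE_MOVES:
            robot.move(move)
            shot = robot.capture()
            score = aesthetic_score(shot)
            if score > best:               # keep moves that help...
                best, best_shot, improved = score, shot, True
            else:
                robot.move(UNDO[move])     # ...and back out of ones that don't
        if not improved:
            break                          # no nearby move helps; done
    return best_shot
```

Framed as reinforcement learning, the aesthetic score plays the role of the reward signal, which is why, as AlZayer notes, no hand-coded composition heuristics are needed.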

“The most challenging part was the fact there was no existing baseline number we were trying to improve,” AlZayer noted to the Cornell Press. “We had to define the entire process and the problem.”

Looking ahead, AlZayer hopes to adapt the AutoPhoto system for outdoor use, potentially swapping out the terrestrial Jackal for a UAV. “Simulating high quality realistic outdoor scenes is very hard,” AlZayer said, “just because it’s harder to perform reconstruction of a controlled scene.” To get around that issue, he and his team are currently investigating whether the AutoPhoto model can be trained on video or still images rather than 3D scenes.

Cadillac will offer two new features to select Super Cruise drivers this summer

GM’s Super Cruise will learn two new tricks this summer. The automaker announced on Tuesday that the driver-assist system will offer Automatic Lane Change and Trailering capabilities for eligible owners.

Owners of the 2021 Cadillac CT4 and CT5 will have the opportunity to purchase the Automatic Lane Change capability, while 2021 Escalade owners will be offered the Trailering option, which allows the SUV to tow without the driver touching the steering wheel. The company estimates that some 12,000 CT4s, CT5s and Escalades will be eligible for the paid updates. “Eligible customers will receive communication from Cadillac about pricing, and how they can purchase and install these new upgrades in the near future,” a GM spokesperson told Engadget via email.

Super Cruise, which can be found on a variety of GM products including the new Hummer EV, is a Level 2 system, meaning it is a driver assist rather than fully autonomous. It relies on a mixture of LiDAR mapping, GPS, visual cameras and radar sensors to navigate traffic. GM originally announced these new features back in 2020; however, the COVID pandemic and a global processor shortage have hampered their rollout.

History Channel will tell the tale of the Hummer EV with a documentary

If you ever wondered how General Motors, one of the biggest automakers on the planet, went from 0 to EV so quickly while managing to reinvent its iconic Hummer SUV, former poster child of automotive excess, as a future-facing electric vehicle, the History Channel has a show for you. Revolution: GMC Hummer EV will take a behind-the-scenes look at the development of the all-electric supertruck when it premieres Sunday, March 27th at 11am ET.

“Our goal was to upend what an electric vehicle is capable of and push the boundaries from 100 years of vehicle development experience,” Executive Chief Engineer for the Hummer EV, Josh Tavel, said in a press statement. “This documentary captures the soul of a team capable of incredible innovation and resilience. Their learnings are laying the foundation of vehicle development for decades to come.”

The hour-long documentary, produced by Hiatus and Detroit-based WTP Pictures and directed by Sean King O’Grady, tracked the Hummer development team over two years of design work at the Global Technical Center in Warren, Michigan, and through grueling environmental testing at GM’s proving grounds in Milford and Yuma.

If you miss the live premiere, Revolution will hit History on Hulu, History.com and the GMC YouTube page the following Sunday, April 3rd.  

Hitting the Books: How Ronald Reagan torpedoed sensible drug patenting

Americans pay two and a half times more for their prescription drugs than residents of any other nation on Earth. Though generic versions of popular compounds accounted for 84 percent of America’s annual sales volume in 2021, they only generated 12 percent of the actual dollars spent. The rest of the money pays for branded drugs — Lipitor, Zestril, Accuneb, Vicodin, Prozac — and we have the Reagan Administration in part to thank for that. In the excerpt below from Owning the Sun: A People’s History of Monopoly Medicine from Aspirin to COVID-19 Vaccines, a fascinating look at the long, infuriating history of public research being exploited for private profit, author Alexander Zaitchik recounts former President Reagan’s court-packing antics from the early 1980s that helped cement lucrative monopolies on name-brand drugs.

Owning the Sun cover
Counterpoint Press

Copyright © 2022 by Alexander Zaitchik, from Owning the Sun: A People’s History of Monopoly Medicine from Aspirin to COVID-19 Vaccines. Reprinted by permission of Counterpoint Press.


When Estes Kefauver died in 1963, he was writing a book about monopoly power called In a Few Hands. Early into Reagan’s first term, the industry must have been tempted to publish a gloating retort titled In a Few Years. Between 1979 and 1981, the drug companies did more than break the stalemate of the 1960s and ’70s — they smashed it wide open. Stevenson-Wydler and Bayh-Dole replaced the Kennedy policy with a functioning framework for the high-speed transfer of public science into private hands. As the full machinery was built out, the industry-funded echo chamber piped a constant flow of memes into the culture: patents alone drive innovation… R&D requires monopoly pricing… progress and American competitiveness depend on it… there is no other way…

In December 1981, the drug companies celebrated another long-sought victory when Congress created a federal court devoted to settling patent disputes. Previously, patent disputes were heard in the districts where they originated. The problem, from industry’s perspective, was the presence of so many staunch New Deal judges in key regions like New York’s Second Circuit. These lifetime judges often understood patent challenges not as threats to property rights, but as opportunities to enforce antitrust law. Local circuit judges appointed by Republicans could also be dangerously old-fashioned in their interpretations of the “novelty” standard. By contrast, the judges on the new patent court, named the Court of Appeals for the Federal Circuit, were appointed by the president. Reagan stuffed its bench with corporate patent lawyers and conservative legal scholars influenced by the Johnny Appleseed of the Law and Economics movement, Robert Bork. Prior to 1982, federal district judges rejected around two-thirds of patent claims; the Court of Appeals has since decided two-thirds of all cases in favor of patent claims. Reagan’s first appointee, Pauline Newman, was the former lead patent counsel for the chemical firm FMC.

The Supreme Court also contributed to the industry’s 1979–1981 run of wins. When Reagan entered office, one of the great scientific-legal unknowns involved the patentability of modified genes. Similar to the uncertainty around the postwar antibiotics market—settled in the industry’s favor by the 1952 Patents Act — the uncertainty threatened the monopoly dreams of the emergent biotechnology sector. The U.S. Patent Office was against patenting modified genes. In 1979, its officers twice rejected an attempt by a General Electric microbiologist to patent a modified bacterium invented to assist in oil spill cleanups. The GE scientist, Ananda Chakrabarty, sued the Patent Office, and in the winter of 1980 Diamond v. Chakrabarty landed before the Supreme Court. In a 5–4 decision written by Warren Burger, the Court overruled the U.S. Patent Office and ruled that modified genes were patentable, as was “anything under the sun that is made by man.” The decision was greeted with audible exhales by the players in the Bayh-Dole alliance. “Chakrabarty was the game changer that provided academic entrepreneurs and venture capitalists the protection they were waiting for,” says economist Öner Tulum. “It paved the way for a more expansive commercialization of science.”

But the industry knew better than to relax. It understood that political victories could be impermanent and fragile, and it had the scar tissue to prove it. Uniquely profitable, uniquely hated, and thus uniquely vulnerable — the companies could not afford to forget that their fantastic postwar wealth and power depended on the maintenance of artificial monopolies resting on dubious if not indefensible ethical and economic arguments that were rejected by every other country on earth. In the United States, home to their biggest profit margins, danger lurked behind every corner in the form of the next crusading senator eager to train years of unwanted attention on these facts. Not even Bayh-Dole, that precious newborn legislation, could be taken for granted. This mode of permanent crisis was validated by the return of a familiar menace in the early 1980s. Of all things, it was the generics industry, an old but weak enemy of the patent-based drug companies, that reappeared and threatened to ruin their celebration of achieving dominance over every corner of medical research and the billions of public dollars flowing through it.

***

As late as the 1930s, there was no “generic” drug industry to speak of. There were only big drug companies and small ones, some with stature, others obscure. They both sold products that were, in the parlance of ethical medicine, “nonproprietary.” To be listed in the United States Pharmacopeia and National Formulary, the official bibles of prescribable medicines, drugs could only carry scientific names; the essential properties of a good scientific name, according to the first edition of the Pharmacopeia, were “expressiveness, brevity, and dissimilarity.” The naming of drugs and medicines formed the other half of the patent taboo: branding a drug evidenced the same knavishness and greed as monopolizing one. The rules of “ethical marketing” did permit products to include an institutional affiliation—Parke-Davis Cannabis Indica Extract, or Squibb Digitalis Tincture—but the names of the medicines themselves (cannabis, digitalis) did not vary. “The generic name emerged as a parallel form of social property belonging to all that resisted commodification and thereby came to occupy a central place in debates about monopoly rights,” writes Joseph Gabriel.

As with patents on scientific medicine, the Germans gave the U.S. drug industry early instruction in the use of trademarks to entrench market control. Hoechst and Bayer broke every rule of so-called ethical marketing, aggressively advertising their breakthrough drugs under trademarks like Aspirin, Heroin, and Novocain. The idea was to twine these names and the things they described in the public mind so tightly, the brand name would secure a de facto monopoly long after the patent expired.

The strategy worked, but the German firms did not reap the benefits. The wartime Office of Alien Property redistributed the German patents and trademarks among domestic firms who produced competing versions of aspirin, creating the first “branded generic.” During the patent taboo’s extended death rattle of the interwar years, more U.S. companies waded into the use of original trademarks to suppress competition. As they experimented with German tactics to avoid “genericide” — the loss of markets after patent expiration — they were enabled by court decisions that transformed trademarks into forms of hard property, similar to the way patents were reconceived in the 1830s.

After World War II, branding and monopoly formed the two-valve heart of a post-ethical growth strategy. The industry’s incredible postwar success — between 1939 and 1959, drug profits soared from $300 million to $2.3 billion — was fueled in large part by expanding the German playbook. While branding monopolies with trade names, the industry initiated campaigns to ruin the reputations of scientifically identical but competing products. The goal was the “scandalization” of generic drugs, writes historian Jeremy Greene. The drug companies “worked methodically to moralize and sensationalize generic dispensing as a dangerous and subversive practice. Dispensing a non-branded product in place of a brand-name product was cast as ‘counterfeiting’; the act of substituting a cheaper version of a drug at the pharmacy was described as ‘beguilement,’ ‘connivance,’ ‘misrepresentation,’ ‘fraudulent,’ ‘unethical’ and ‘immoral.’”

As with patenting, it was the drug companies that dragged organized medicine with them into the post-ethical future. As late as 1955, the AMA’s Council on Pharmacy and Chemistry maintained a ban on advertisements for branded products in its Journal. That changed the year Equanil hit the market, opening the age of branded prescription drugs as a leading source of income for medical journals and associations. “Clinical journals and newer ‘throwaway’ promotional media now teemed with advertisements for Terramycin, Premarin, and Diuril rather than oxytetracycline (Pfizer), conjugated equine estrogens (Wyeth) or chlorothiazide (Merck),” writes Greene. In 1909, only one in ten prescription drugs carried a brand name. By 1969, the ratio had flipped, with only one in ten marketed under its scientific name. In another echo of the patent controversy, the rise of marketing and branded drugs produced division and resistance. By the mid-1950s, an alliance of so-called nomenclature reformers arose to decry trademarks as unscientific handmaidens of monopoly and call for a return to the use of scientific names. These reformers — doctors, pharmacists, labor leaders — made regular appearances before the Kefauver committee beginning in 1959. Their testimony on how the industry used trademarks to suppress competition informed a section in Kefauver’s original bill requiring doctors to use scientific names in all prescriptions. The proposed law reflected the norms that reigned during ethical medicine’s heyday, and would have allowed doctors to recommend firms, but not their branded products. Like most of Kefauver’s core proposals, however, the generic clause was excised. The only trademark-related reform in the final Kefauver-Harris Amendments placed limits on companies’ ability to rebrand and market old medicines as new breakthroughs.

Volkswagen officially unveils its ID.Buzz EV, the hippie bus reborn

The Microbus is back, baby! Nearly 75 years since the first Volkswagen Type 2 rolled off its assembly line and into the annals of Americana as an icon of 1960s counterculture, VW is re-releasing the emblematic vehicle — this time as a full EV.

VW ID.Buzz interior
VW

VW executives took to the livestreaming stage on Wednesday, ahead of SXSW 2022’s kickoff, to debut the ID.Buzz, which will be available as both a people mover and a cargo van (dubbed the ID.Buzz Cargo) beginning this year. The ID.Buzz will appear in Europe first — arriving later in 2022 — and will be available with a number of options its American-market cousins will lack, including short-wheelbase and commercial-grade variants. There’s even a Level 4 self-driving version that will begin its Shared Riding Model pilot program in Hamburg in 2025. The American iterations will debut in 2023, Scott Keogh, CEO of VW America, promised during the stream, and are slated to arrive in American showrooms in 2024.

ID Buzz California
VW

Volkswagen only had the European model to show off Wednesday, but Keogh noted that the US version would be “more stylized for the American marketplace” and said he has “no doubt that it will be worth the wait,” while teasing a California camper edition. The US version will have a slightly longer wheelbase and offer three rows of seating to the European version’s two. Thanks to that comparatively shorter wheelbase, the European model’s turning radius is a scant 11 meters, on par with the Ioniq 5 or the VW Golf.

ID Buzz beach
VW

The ID.Buzz is built atop VW’s modular electric drive matrix (MEB if you say it in German), and is actually the largest model to date developed for the platform. MEB is the same base Ford plans to use for one of its European market vehicles in 2023. 

The ID.Buzz will come equipped with a 77 kWh battery pack (slightly smaller than the 82 kWh pack in the ID.4, which is also MEB-based) with a 170 kW charging capacity, powering a 150 kW rear motor. It will be capable of bidirectional charging, at least in the European model, enabling V2H (vehicle-to-home) energy transfers.
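For a rough sense of what those numbers mean at a fast charger, here's a back-of-envelope estimate. The 170 kW figure is a peak rate and real sessions taper off, so the average rate below is our assumption for illustration, not a VW spec.

```python
# Back-of-envelope DC fast-charging estimate for a 77 kWh pack.
# 170 kW is the peak rate; charging tapers, so we assume a lower
# average rate over the session (an illustrative guess, not a spec).

pack_kwh = 77
avg_rate_kw = 120                 # assumed session average, below the 170 kW peak
start_soc, end_soc = 0.05, 0.80   # a typical 5-80% fast-charge window

energy_kwh = pack_kwh * (end_soc - start_soc)   # ~57.8 kWh to add
minutes = energy_kwh / avg_rate_kw * 60         # ~29 minutes
print(f"~{energy_kwh:.1f} kWh in roughly {minutes:.0f} minutes")
```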

The passenger model will seat five, with 1.12 cubic meters (39.5 cubic feet) of cargo space, while the Cargo will offer 3.9 cubic meters (137.7 cubic feet) by replacing the rear seats with a partition behind the front row. For the interior, VW designers took inspiration from the aesthetics of the Microbus, pulling style elements from the T1 generation and matching seat cushions, dash panels and door trim to the vehicle’s exterior paint color, of which buyers will have their pick of seven solid-color options and four two-tone schemes (white plus another color).

VW Buzz seats
Ingo Barenschee

The European version showcased a number of impressive driver-assistance features, including Active Lane-Change Assist and Park Assist Plus, as well as V2X data sharing, meaning the ID.Buzz can share road-hazard information with both the enabled vehicles around it and the surrounding traffic infrastructure. OTA updates will be standard on the Buzz as well.

ID Buzz interior
VW

The Cargo version will offer a number of customizable aspects, including the choice between bench and bucket seats, as well as between a tailgate and twin swing-out rear doors, plus double sliding side doors. Furthermore, VW will offer a number of conversion options for the Buzz, which should allow service providers of all stripes to customize the vehicle to their specific needs. In terms of carrying capacity, the Cargo can haul up to 600 kg of stuff inside, with another 100 kg of gear affixed to its roof.

ID Buzz rear
VW

VW also noted during the presentation the extensive work it put into lessening the environmental impacts of the ID.Buzz’s production. The interior upholstery is made completely animal-free — the steering wheel may be made of polyurethane, but VW executives swear it has the same look and feel as leather. The seat covers, floor coverings and headliner are all similarly composed of recycled materials like marine plastic and old water bottles. Using these materials emits 32 percent less carbon than similar conventional products would, according to the company. Overall, VW hopes to cut its carbon emissions in Europe by 40 percent by 2030 and, as part of its Way to Zero plan, achieve climate neutrality by 2050.

California pilot program turns GM’s EVs into roving battery packs

While not nearly as much of a mess as Texas’ energy infrastructure, California’s power grid has seen its fair share of brownouts, rolling blackouts and outages stemming from wildfires sparked by PG&E equipment. To help mitigate the economic impact of those disruptions, this summer General Motors and PG&E, Northern California’s energy provider, will team up to test using the automaker’s electric vehicles as roving backup battery packs for the state’s power grid.

The pilot program, announced by GM CEO Mary Barra on CNBC Tuesday morning, is premised on bidirectional charging technology, wherein power can flow both from the grid to a vehicle (G2V charging) and from a vehicle back to the grid (V2G), allowing the vehicle to act as an on-demand power source. GM plans to offer this capability as part of its Ultium battery platform on more than a million of its EVs by 2025. Currently, the Nissan Leaf and the Nissan e-NV200 offer V2G charging, while Volkswagen announced in 2021 that its ID line will offer it later this year, and the Ford F-150 Lightning will as well.

This summer’s pilot will initially investigate “the use of bidirectional hardware coupled with software-defined communications protocols that will enable power to flow from a charged EV into a customer’s home, automatically coordinating between the EV, home and PG&E’s electric supply,” according to a statement from the companies. Should the initial tests prove fruitful, the program will expand first to a small group of PG&E customers before scaling up to “larger customer trials” by the end of 2022.
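As a conceptual sketch of the coordination logic that statement describes, consider a home energy manager that switches the house onto the EV when the grid drops and back when it returns. Every class and method name here is hypothetical; the actual pilot builds on industry standards (see the ISO 15118 mention below), not this toy loop.

```python
# Conceptual V2H coordination loop: vehicle powers the home during an
# outage, and normal grid-to-vehicle charging resumes afterward. All
# names are hypothetical; this is an illustration, not GM or PG&E's
# implementation.

RESERVE_SOC = 0.20   # always keep enough charge to drive somewhere

def coordinate(grid, ev, home):
    if grid.is_up():
        ev.stop_discharging()
        ev.charge_from(grid)              # G2V: normal charging
    elif ev.state_of_charge() > RESERVE_SOC:
        ev.discharge_to(home)             # V2H: the car runs the house
    else:
        ev.stop_discharging()
        home.shed_noncritical_loads()     # battery low: cut nonessential loads
```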

“Imagine a future in which there’s an EV in every garage that functions as a backup power source whenever it’s needed,” GM spokesperson Rick Spina said during a press call on Monday.

“We see this expansion as being the catalyst for what could be the most transformative time for two industries, both utilities and the automotive industry,” PG&E spokesperson Aaron August added. “This is a huge shift in the way we’re thinking about electric vehicles, and personal vehicles overall. Really, it’s not just about getting from point A to point B anymore. It’s about getting from point A to point B with the ability to provide power.”

Technically speaking, from a hardware standpoint, GM vehicles as currently sold can already provide bidirectional charging, Spina noted during the call. The current challenge, and what this pilot program is designed to address, is developing the software and UX infrastructure needed to ensure that PG&E customers can easily use the system day to day. “The good news there is, it’s nothing different from what’s already industry standard for connectors, software protocols,” August said. “The industry is moving towards ISO 15118-20.”

The length of time that an EV will be able to run the household it’s tethered to will depend on a number of factors — from the size of the vehicle’s battery to the home’s power consumption to the prevailing weather — but August estimates that for an average California home using 20 kWh daily, a fully charged Chevy Bolt would have enough juice to power the house for around 3 days. This pilot program comes as automakers and utilities alike work out how to most effectively respond to the state’s recent directive banning the sale of new internal combustion vehicles starting in 2035.
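The arithmetic behind that estimate is simple enough to check. The Bolt pack size below is an approximation of ours, not a figure from the article.

```python
# Sanity-checking the "around 3 days" estimate.

bolt_pack_kwh = 65        # approximate Chevy Bolt pack size (our assumption)
home_kwh_per_day = 20     # the average-California-home figure cited above

days = bolt_pack_kwh / home_kwh_per_day
print(f"~{days:.1f} days of backup power")   # ~3.2 days, i.e. "around 3 days"
```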

Hitting the Books: How Mildred Dresselhaus’ research proved we had graphite all wrong

Mildred Dresselhaus’ life was one lived in defiance of the odds. Growing up poor in the Bronx — and, even more to her detriment, growing up a woman in the 1940s — Dresselhaus had paltry traditional career options. Instead, she rose to become one of the world’s preeminent experts in carbon science, as well as the first female Institute Professor at MIT, where she spent 57 years of her career. She collaborated with physics luminaries like Enrico Fermi, laid the essential groundwork for future Nobel Prize-winning research, directed the Office of Science at the U.S. Department of Energy and was herself awarded the National Medal of Science.

In the excerpt below from Carbon Queen: The Remarkable Life of Nanoscience Pioneer Mildred Dresselhaus, author Maia Weinstock, deputy editorial director at MIT News, tells of the time Dresselhaus collaborated with Iranian American physicist Ali Javan to investigate exactly how charge carriers — i.e., electrons — move about within a graphite matrix, research that would completely overturn the field’s understanding of how these subatomic particles behave.

Carbon Queen Cover
MIT Press

Excerpted from Carbon Queen: The Remarkable Life of Nanoscience Pioneer Mildred Dresselhaus by Maia Weinstock. Reprinted with permission from The MIT Press. Copyright 2022.


A CRITICAL ABOUT-FACE

For anyone with a research career as long and as accomplished as that of Mildred S. Dresselhaus, there are bound to be certain papers that might get a bit lost in the corridors of the mind—papers that make only moderate strides, perhaps, or that involve relatively little effort or input (when, for example, being a minor consulting author on a paper with many coauthors). Conversely, there are always standout papers that one can never forget—for their scientific impact, for coinciding with particularly memorable periods of one’s career, or for simply being unique or beastly experiments.

Millie’s first major research publication after becoming a permanent member of the MIT faculty fell into the standout category. It was one she described time and again in recollections of her career, noting it as “an interesting story for history of science.”

The story begins with a collaboration between Millie and Iranian American physicist Ali Javan. Born in Iran to Azerbaijani parents, Javan was a talented scientist and award-winning engineer who had become well known for his invention of the gas laser. His helium-neon laser, coinvented with William Bennett Jr. when both were at Bell Labs, was an advance that made possible many of the late twentieth century’s most important technologies—from CD and DVD players to bar-code scanning systems to modern fiber optics.

After publishing a couple of papers describing her early magneto-optics research on the electronic structure of graphite, Millie was looking to delve even deeper, and Javan wanted to help. The two met during Millie’s work at Lincoln Lab; she was a huge fan, once calling him “a genius” and “an extremely creative and brilliant scientist.”

For her new work, Millie aimed to study the magnetic energy levels in graphite’s valence and conduction bands. To do this, she, Javan, and a graduate student, Paul Schroeder, employed a neon gas laser, which would provide a sharp point of light to probe their graphite samples. The laser had to be built especially for the experiment, and it took years for the fruits of their labor to mature; indeed, Millie moved from Lincoln to MIT in the middle of the work.

If the experiment had yielded only humdrum results, in line with everything the team had already known, it still would have been a path-breaking exercise because it was one of the first in which scientists used a laser to study the behavior of electrons in a magnetic field. But the results were not humdrum at all. Three years after Millie and her collaborators began their experiment, they discovered their data were telling them something that seemed impossible: the energy level spacing within graphite’s valence and conduction bands were totally off from what they expected. As Millie explained to a rapt audience at MIT two decades later, this meant that “the band structure that everybody had been using up till that point could certainly not be right, and had to be turned upside down.”

In other words, Millie and her colleagues were about to overturn a well-established scientific rule—one of the more exciting and important types of scientific discoveries one can make. Just like the landmark 1957 publication led by Chien-Shiung Wu, who overturned a long-accepted particle physics concept known as conservation of parity, upending established science requires a high degree of precision—and confidence in one’s results. Millie and her team had both.

What their data suggested was that the previously accepted placement of entities known as charge carriers within graphite’s electronic structure was actually backward. Charge carriers, which allow energy to flow through a conducting material such as graphite, are essentially just what their name suggests: something that can carry an electric charge. They are also critical for the functioning of electronic devices powered by a flow of energy.

Electrons are a well-known charge carrier; these subatomic bits carry a negative charge as they move around. Another type of charge carrier can be seen when an electron moves from one atom to another within a crystal lattice, creating something of an empty space that also carries a charge—one that’s equal in magnitude to the electron but opposite in charge. In what is essentially a lack of electrons, these positive charge carriers are known as holes.

FIGURE 6.1 In this simplified diagram, electrons (black dots) surround atomic nuclei in a crystal lattice. In some circumstances, electrons can break free from the lattice, leaving an empty spot or hole with a positive charge. Both electrons and holes can move about, affecting electrical conduction within the material.
MIT Press

Millie, Javan, and Schroeder discovered that scientists were using the wrong assignment of holes and electrons within the previously accepted structure of graphite: they found electrons where holes should be and vice versa. “This was pretty crazy,” Millie stated in a 2001 oral history interview. “We found that everything that had been done on the electronic structure of graphite up until that point was reversed.”

As with many other discoveries overturning conventional wisdom, acceptance of the revelation was not immediate. First, the journal to which Millie and her collaborators submitted their paper originally refused to publish it. In retelling the story, Millie often noted that one of the referees, her friend and colleague Joel McClure, privately revealed himself as a reviewer in hopes of convincing her that she was embarrassingly off-base. “He said,” Millie recalled in a 2001 interview, “‘Millie, you don’t want to publish this. We know where the electrons and holes are; how could you say that they’re backwards?’” But like all good scientists, Millie and her colleagues had checked and rechecked their results numerous times and were confident in their accuracy. And so, Millie thanked McClure and told him they were convinced they were right. “We wanted to publish, and we… would take the risk of ruining our careers,” Millie recounted in 1987.

Giving their colleagues the benefit of the doubt, McClure and the other peer reviewers approved publication of the paper despite conclusions that flew in the face of graphite’s established structure. Then a funny thing happened: bolstered by seeing these conclusions in print, other researchers emerged with previously collected data that made sense only in light of a reversed assignment of electrons and holes. “There was a whole flood of publications that supported our discovery that couldn’t be explained before,” Millie said in 2001.

Today, those who study the electronic structure of graphite do so with the understanding of charge carrier placement gleaned by Millie, Ali Javan, and Paul Schroeder (who ended up with quite a remarkable thesis based on the group’s results). For Millie, who published the work in her first year on the MIT faculty, the experiment quickly solidified her standing as an exceptional Institute researcher. While many of her most noteworthy contributions to science were yet to come, this early discovery was one she would remain proud of for the rest of her life.

ICANN says it won’t kick Russia off the internet

Even as governments and corporations around the globe squeeze the Russian economy with increasingly stringent financial sanctions over the country’s invasion of its neighbor, Ukraine, some within the aggrieved nation have sought to punish Russia further by kicking it off the internet entirely.

On Monday, a pair of Ukrainian officials petitioned ICANN (the Internet Corporation for Assigned Names and Numbers), as well as the Réseaux IP Européens Network Coordination Centre (RIPE NCC), to revoke the domains “.ru”, “.рф” and “.su.” They also asked that root servers in Moscow and St. Petersburg be shut down — moves that could knock websites under those domains offline. On Thursday, ICANN responded to the request with a hard pass, saying that doing so is neither within the scope of ICANN’s mission nor really feasible in the first place.

“As you know, the Internet is a decentralized system. No one actor has the ability to control it or shut it down,” ICANN CEO Göran Marby wrote on Thursday in his response to Andrii Nabok, ICANN representative for Ukraine, and Mykhailo Fedorov, Ukraine’s deputy prime minister and digital transformation minister.

“Our mission does not extend to taking punitive actions, issuing sanctions, or restricting access against segments of the Internet — regardless of the provocations,” he continued. “Essentially, ICANN has been built to ensure that the Internet works, not for its coordination role to be used to stop it from working.”