ESPN will broadcast NBA action tonight with game-like volumetric video

TV broadcasters are trying all sorts of new tactics to spice up live coverage, including some truly wild things for sports. The NFL made games kid-friendly with Nickelodeon-style slime cannons, for example. For tonight’s NBA matchup between the Mavericks and Nets, ESPN is trying something with more universal appeal. The network says that for the first time ever, 3D volumetric video will be used for a live full-game broadcast.

The project is the result of a collaboration between ESPN Edge, Disney Media & Entertainment Distribution (DMED) Technology teams, the NBA and Canon. The experimental setup uses Canon’s Free-Viewpoint Video (FVV) system with over 100 data capture cameras positioned around the basketball court. The result is a live sports broadcast merged with multi-dimensional footage — something that looks very much like you’re watching a real-life video game. 

While ESPN says this is the first time the technology has been used for a full live production of a sporting event, it has been used before. With their “Netaverse,” the Brooklyn Nets — in collaboration with the NBA, Canon and the YES Network — have used the dimensional footage for replay clips and other post-production content. The Nets are also the first team from any of the four major US pro leagues to utilize the system, first capturing game action with it in mid-January. The clips you see here are from early use of the system, but ESPN said it worked with DMED Technology to build on top of what Canon, the NBA, the Nets and YES had done, making several enhancements so it worked better for live games. The still image above doesn’t do this justice, so you need to see the video clips, even in their early form, to get a real sense of what this looks like.

Six separate feeds are sent to ESPN’s control room in Bristol, CT, essentially offering six virtual cameras that are each able to move in three-dimensional space to any spot on or around the court. Each feed has a dedicated “camera” operator who controls the view. The alternate broadcast will also have its own production team, as well as dedicated commentators, piping in the natural arena audio from Barclays Center in Brooklyn. ESPN says the broadcast isn’t totally reliant on volumetric video, as it can integrate traditional cameras, replays and other content into the 3D environment via a rendered version of the jumbotron.

Last April, ESPN offered an alternate Marvel-themed “Arena of Heroes” broadcast during an NBA game. While that bent more towards the cartoony aspect of video games, tonight’s effort is more about showing the action with an immersive, dimensional quality. The network says the experiment shows new ways emerging technology can be used to offer something beyond what we’re used to seeing on TV, expanding what’s possible for production in the future.

The alternate broadcast will be available on ESPN+ and ESPNEWS when the Mavericks and Nets tip off at 7:30PM ET tonight. 

Tesla halts work at Shanghai factory amid coronavirus outbreak

SHANGHAI (Reuters) – Tesla is suspending production at its Shanghai factory for two days, according to a notice sent internally and to suppliers, as China tightens COVID restrictions to curb the country’s latest outbreak. 

The Shanghai factory runs around the clock. In the notice, which was reviewed by Reuters, suppliers and Tesla staff were told on Wednesday that production would be suspended on Wednesday and Thursday.

It did not give a reason for the stoppage at the plant, also known as the Gigafactory 3, which makes the Tesla Model 3 sedan and the Model Y crossover sport utility vehicle. 

Many cities across China, including Shanghai, have been rolling out strict movement controls to stem the country’s largest COVID-19 outbreak in two years. The measures have also caused factory shutdowns in parts of the country, putting pressure on supply chains. 

Tesla did not have immediate comment. 

Its Shanghai factory produces cars for the China market and is also a crucial export hub to Germany and Japan. It delivered 56,515 vehicles in February, including 33,315 for export, according to the China Passenger Car Association. 

That amounts to an average of around 2,018 vehicles a day over February’s 28 days.

It was not immediately clear whether the suspension of work would apply to other plant operations over the two days. 

Two people briefed on the notice said they understood it applied to Tesla’s general assembly lines. They declined to be identified because the information was not public. 

The notice did not specify whether the measures would correspond to a loss of production, or whether Tesla could make up for any lost output. 

Authorities in Shanghai have asked many residents not to leave their homes or work places for 48 hours to as long as 14 days as they conduct COVID tests or carry out contact tracing. 

In a separate notice issued on Wednesday that was also seen by Reuters, Tesla asked suppliers to estimate how many workers were needed to achieve full production and to provide details of workers affected by COVID restrictions. 

It also asked suppliers to prepare workers to live, sleep and eat at the factories in an arrangement similar to China’s “closed-loop management” process. Apple supplier Foxconn was allowed to resume some operations at its Shenzhen campus on Wednesday after it set up such an arrangement. 

Tesla was alerted by one supplier last weekend that its production had been affected by COVID measures, said a person familiar with the matter. That supplier told Tesla that its stockpiles could only last for two days, the person said. 

Any protracted China lockdowns will further rattle Asian supply chains, OCBC economist Wellian Wiranto said in a research note, noting the southern manufacturing hub of Shenzhen alone produces 11% of China’s exports. 

(Reporting by Zhang Yan and Brenda Goh; Editing by Kenneth Maxwell and Kim Coghill)

You can now draft an email in Google Docs and send it to Gmail

Google might come to your rescue the next time you need to write a carefully worded email. The company is rolling out a Google Docs update that lets Workspace and legacy G Suite users collaborate on Gmail drafts. Open the email draft template (Insert > Building Blocks > Email draft) and your colleagues can comment or make suggestions. You won’t always need to know recipients’ email addresses, either, as you can mention people by name.

When you’re ready to send the email, you just need to click a button to open a Gmail compose window and finalize the message. Docs will automatically populate all the relevant fields.

The feature will take up to 15 days to reach companies on Rapid Release domains, and will start reaching more cautious Scheduled Release customers on March 22nd. There’s no mention of availability for personal use. At work, however, this could prove very handy — lawyers could use it to produce an airtight email to a client, while marketers might work together on their ideal sales pitch.

Cornell researchers taught a robot to take Airbnb photos

Aesthetics is what happens when our brains interact with content and go, “ooh pretty, give me more of that please.” Whether it’s a starry night or The Starry Night, the sound of a scenic seashore or the latest single from Megan Thee Stallion, understanding how the sensory experiences that scintillate us most deeply do so has spawned an entire branch of philosophy studying art, in all its forms, as well as how it is devised, produced and consumed. While what constitutes “good” art varies between people as much as what constitutes porn, the appreciation of life’s finer things is an intrinsically human endeavor (sorry, Suda) — or at least it was until we taught computers how to do it too.

The study of computational aesthetics seeks to quantify beauty as expressed in human creative endeavors, essentially using mathematical formulas and machine learning algorithms to appraise a specific piece based on existing criteria, reaching (hopefully) an equivalent opinion to that of a human performing the same inspection. This field was founded in the early 1930s when American mathematician George David Birkhoff devised his theory of aesthetics, M=O/C, where M is the aesthetic measure (think: a numerical score), O is order and C is complexity. Under this metric, simple, orderly pieces would be ranked higher — i.e. be more aesthetically pleasing — than complex and chaotic scenes.
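To make the formula concrete, here is a tiny illustrative Python sketch of Birkhoff’s measure; the order and complexity scores are invented for the example and don’t follow Birkhoff’s actual scoring rules for polygons or ornaments.

```python
# Toy illustration of Birkhoff's aesthetic measure M = O / C.
# The order and complexity values below are made up for demonstration only.

def aesthetic_measure(order: float, complexity: float) -> float:
    """Return Birkhoff's M = O / C."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return order / complexity

# A simple, orderly composition outranks a busy, chaotic one.
print(aesthetic_measure(order=4.0, complexity=2.0))   # M = 2.0
print(aesthetic_measure(order=4.0, complexity=10.0))  # M = 0.4
```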

German philosopher Max Bense and French engineer Abraham Moles each independently formalized Birkhoff’s initial work into a reliable scientific method for gauging aesthetics in the 1950s. By the ’90s, the International Society for Mathematical and Computational Aesthetics had been founded and, over the past 30 years, the field has further evolved, spreading into AI and computer graphics, with an ultimate goal of developing computational systems capable of judging art with the same objectivity and sensitivity as humans, if not superior sensibilities. As such, these computer vision systems have found use in augmenting human appraisers’ judgements and automating rote image analysis similar to what we’re seeing in medical diagnostics, as well as grading video and photographs to help amateur shutterbugs improve their craft.

Recently, a team of researchers from Cornell University took a state-of-the-art computational aesthetic system one step further, enabling the AI to not only determine the most pleasing picture in a given dataset, but capture new, original — and most importantly, good — shots on its own. They’ve dubbed it AutoPhoto, and its study was presented last fall at the International Conference on Intelligent Robots and Systems. This robo-photographer consists of three parts: the image evaluation algorithm, which evaluates a presented image and issues an aesthetic score; a Clearpath Jackal wheeled robot upon which the camera is affixed; and the AutoPhoto algorithm itself, which serves as a sort of firmware, translating the results from the image grading process into drive commands for the physical robot and effectively automating the optimized image capture process.
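As a rough mental model of how those three parts interact (every name below is a placeholder for illustration, not anything from the Cornell code), a hedged Python sketch of the capture loop might look like this, with a toy one-dimensional “scene” standing in for the real camera and scorer:

```python
import random

class AestheticScorer:
    """Stand-in for the learned image evaluation model (scores in [0, 1])."""
    def score(self, view: float) -> float:
        # Toy scene: viewpoints near position 0.7 are the "best composed".
        return max(0.0, 1.0 - abs(view - 0.7))

class FakeJackal:
    """Stand-in for the camera-carrying Clearpath Jackal, reduced to 1-D."""
    def __init__(self) -> None:
        self.position = 0.0
    def capture(self) -> float:
        return self.position
    def move(self, delta: float) -> None:
        self.position += delta

def autophoto_loop(scorer, robot, steps: int = 12):
    """Nudge the robot step by step toward a higher-scoring viewpoint."""
    best_view = robot.capture()
    best_score = scorer.score(best_view)
    for _ in range(steps):
        robot.move(random.uniform(-0.2, 0.2))   # propose a small adjustment
        view = robot.capture()
        score = scorer.score(view)
        if score >= best_score:                 # keep improvements...
            best_view, best_score = view, score
        else:                                   # ...otherwise back up to the best spot so far
            robot.position = best_view
    return best_view, best_score

print(autophoto_loop(AestheticScorer(), FakeJackal()))
```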

For its image evaluation algorithm, the Cornell team, led by second-year master’s student Hadi AlZayer, leveraged an existing learned aesthetic estimation model, which had been trained on a dataset of more than a million human-ranked photographs. AutoPhoto itself was virtually trained on dozens of 3D images of interior room scenes to spot the optimally composed angle before the team attached it to the Jackal.

When let loose in a building on campus, as you can see in the video above, the robot starts off with a slew of bad takes, but as the AutoPhoto algorithm gains its bearings, its shot selection steadily improves until the images rival those of local Zillow listings. On average, it took about a dozen iterations to optimize each shot, and the whole process took just a few minutes to complete.

“You can essentially take incremental improvements to the current commands,” AlZayer told Engadget. “You can do it one step at a time, meaning you can formulate it as a reinforcement learning problem.” This way, the algorithm doesn’t have to conform to traditional heuristics like the rule of thirds because it already knows what people will like as it was taught to match the look and feel of the shots it takes with the highest-ranked pictures from its training data, AlZayer explained.
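That framing maps onto a standard reinforcement learning step: the action is a small camera adjustment, and the reward is the change in aesthetic score. The toy environment below, which reuses the stand-in scorer, robot and random import from the sketch above, only illustrates that formulation; it is not the paper’s actual setup.

```python
class PhotoEnv:
    """Toy RL environment: the state is the camera view, the action a small move."""
    def __init__(self, scorer, robot):
        self.scorer, self.robot = scorer, robot
        self.last_score = scorer.score(robot.capture())

    def step(self, action: float):
        """Apply the adjustment; the reward is the change in aesthetic score."""
        self.robot.move(action)
        score = self.scorer.score(self.robot.capture())
        reward = score - self.last_score   # positive only if the shot improved
        self.last_score = score
        done = score > 0.95                # arbitrary "good enough" threshold
        return self.robot.capture(), reward, done

# A few random actions using the toy scorer and robot defined earlier.
env = PhotoEnv(AestheticScorer(), FakeJackal())
for _ in range(3):
    obs, reward, done = env.step(random.uniform(-0.2, 0.2))
    print(round(obs, 2), round(reward, 2), done)
```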

“The most challenging part was the fact there was no existing baseline number we were trying to improve,” AlZayer noted to the Cornell Press. “We had to define the entire process and the problem.”

Looking ahead, AlZayer hopes to adapt the AutoPhoto system for outdoor use, potentially swapping out the terrestrial Jackal for a UAV. “Simulating high quality realistic outdoor scenes is very hard,” AlZayer said, “just because it’s harder to perform reconstruction of a controlled scene.” To get around that issue, he and his team are currently investigating whether the AutoPhoto model can be trained on video or still images rather than 3D scenes.

ESPN’s iOS app adds SharePlay to help you watch sports with friends

You won’t have to invite friends over to share an ESPN sports stream. The network has added SharePlay support to the ESPN app for iOS and iPadOS, letting US viewers watch live and on-demand programming with up to 31 other people. Everyone watching will need either ESPN (via TV Everywhere) or ESPN+ access, but it might be worthwhile to share an exciting shot or questionable referee call in the heat of the moment.

As with SharePlay in other apps, the functionality requires at least iOS 15.1 or iPadOS 15.1. You’ll have to wait until an Apple TV update sometime later in March to use the feature on the big screen in tandem with an iPhone or iPad.

ESPN is relatively late to SharePlay, as some services have had the feature since late 2021. Its sibling service Disney+ has had group viewing (albeit using a custom approach) since 2020. This may be one of the more important implementations, however. Live sports are a huge draw for co-viewing features like this, and ESPN’s large audience might introduce SharePlay to many people who otherwise wouldn’t realize it existed.

Instagram is getting ‘parental supervision’ features

Meta is introducing new “parental supervision” features for Instagram and virtual reality. The update will be available first for Instagram, which has faced a wave of scrutiny for its impact on teens and children, with new parental controls coming to Q…

The fifth-generation iPad Air in photos: a quick hands-on first impression

The iPad Air is moving to its fifth generation. The spec changes from the fourth generation are an upgrade to the M1 processor and 5G support for the cellular model. In addition, the front camera now supports Center Stage, and RAM capacity and USB-C transfer speeds have also improved. …