Driving lesson

I was fired for a self-driving accident video • The Register

In short Tesla reportedly fired an employee after he uploaded videos to YouTube criticizing the automaker’s self-driving software.

John Bernal, a former Tesla test operator who worked on the Autopilot platform, runs a YouTube channel under the username AI Addict. He filmed and shared several videos demonstrating the capabilities of Tesla’s Full Self-Driving (FSD) software, which is still in development.

He says management fired him in February, telling him he had “violated Tesla policy” and that his YouTube channel was a “conflict of interest,” according to CNBC. Bernal insists he never revealed confidential information and that his criticism was always limited to FSD versions already released to public beta testers.

One of his videos shows him driving around Oakland, California, during which FSD steered his car into the wrong lane, nearly swerved into oncoming traffic to avoid a cyclist, and misbehaved in other situations. Another shows FSD crashing his car into bollards near San Jose.

Tesla carefully controls its public image by granting beta FSD access to content creators who promote the software. One driver previously told The Register he could not discuss it with us because the system is not yet publicly available. Tesla has also disbanded its public relations department and does not respond to press inquiries.

“I always care about Tesla, vehicle safety and finding and fixing bugs,” Bernal said.

Destroy your AI models and delete data

The U.S. Federal Trade Commission (FTC) is getting tougher on companies suspected of illegally collecting data, ordering them not only to delete the records but also to destroy any AI models trained on that information.

The regulator has included the requirement to destroy data and the corresponding trained models in three settlements with companies over the past three years, Protocol noted.

That’s what happened to Weight Watchers this month, after it was accused of illegally harvesting data through an app aimed at encouraging young adults and children to eat more healthily. The company was ordered to destroy any machine learning models built using that data.

The first time the FTC made this demand was of Cambridge Analytica. The second time, it hit Everalbum, a photo-sharing app accused of scraping users’ selfies without permission to build a facial recognition algorithm.

“Cambridge Analytica was a good decision, but I wasn’t sure it [the rule] was going to become a model,” commented Pam Dixon, executive director of the World Privacy Forum.

Dixon and other experts now believe the FTC will force more companies to delete all data obtained without consent as well as any models that may have been built using those samples.

GPT-3 can now edit text or code

OpenAI’s language model normally responds to input with output: give it a prompt, and it generates a completion. But now users can instruct GPT-3 to edit existing text or code in place, rather than starting over from scratch.

OpenAI shared a short clip showing how it works.

Instead of rewriting or rerunning prompts from the beginning, users can edit the input text directly and have GPT-3 modify its output accordingly. The ability to edit or insert new text will make life easier for developers using the GPT-3-powered Codex tool, as well as for people writing longer documents.

“Codex was our original motivation for developing this capability, because in software development we typically add code in the middle of an existing file, where code is present before and after the completion,” OpenAI explained.
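To make the distinction concrete, here is a rough sketch of how the two modes might differ at the request level. The endpoint paths, model names, and parameters below are assumptions drawn from OpenAI’s public API documentation of the era, not details given in the article:

```python
import json

API_BASE = "https://api.openai.com/v1"  # assumed base URL


def build_edit_request(text: str, instruction: str) -> dict:
    # Body for POST {API_BASE}/edits: the model rewrites `text`
    # according to the natural-language `instruction`.
    return {
        "model": "text-davinci-edit-001",  # assumed edit-mode model name
        "input": text,
        "instruction": instruction,
    }


def build_insert_request(prefix: str, suffix: str) -> dict:
    # Body for POST {API_BASE}/completions: supplying a `suffix`
    # asks the model to fill in text between `prefix` and `suffix`,
    # i.e. insertion in the middle of an existing file.
    return {
        "model": "text-davinci-002",  # assumed completion model name
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": 64,
    }


edit_body = build_edit_request("def add(a, b): return a - b", "Fix the bug")
insert_body = build_insert_request("def add(a, b):\n", "\nprint(add(1, 2))\n")
print(json.dumps(edit_body, indent=2))
```

The key difference: edit mode pairs existing text with an instruction describing the change, while insert mode supplies the code before and after the gap and lets the model fill in the middle.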

The edit function is free, but the insert function will cost you. ®