All posts in “nvidia”

Mobileye CEO clowns on Nvidia for allegedly copying self-driving car safety scheme

While creating self-driving car systems, it’s natural that different companies might independently arrive at similar methods or results — but the similarities in a recent “first of its kind” Nvidia proposal to work done by Mobileye two years ago were just too much for the latter company’s CEO to take politely.

Amnon Shashua, in a blog post on parent company Intel’s news feed cheekily titled “Innovation Requires Originality,” openly mocks Nvidia’s “Safety Force Field,” pointing out innumerable similarities to Mobileye’s “Responsibility Sensitive Safety” paper from 2017.

He writes:

It is clear Nvidia’s leaders have continued their pattern of imitation as their so-called “first-of-its-kind” safety concept is a close replica of the RSS model we published nearly two years ago. In our opinion, SFF is simply an inferior version of RSS dressed in green and black. To the extent there is any innovation there, it appears to be primarily of the linguistic variety.

Now, it’s worth considering the idea that the approach both seem to take is, like many approaches in the automotive and autonomy fields, simply inevitable. Car makers don’t go around accusing each other of copying the basic setup of four wheels and two pedals. It’s partly for this reason, and partly because the safety model works better the more cars follow it, that Mobileye published its RSS paper openly and invited the industry to collaborate.

Many did, including, as Shashua points out, Nvidia, at least for a short time in 2018, after which the company pulled out of collaboration talks. To do so and then, a year later, propose a system that is, if not identical, then at least remarkably similar, without crediting or even mentioning Mobileye, is suspicious to say the least.

The (highly simplified) foundation of both is calculating a set of standard actions corresponding to laws and human behavior that plan safe maneuvers based on the car’s own physical parameters and those of nearby objects and actors. But the similarities extend beyond these basics, Shashua writes (emphasis his):

RSS defines a safe longitudinal and a safe lateral distance around the vehicle. When those safe distances are compromised, we say that the vehicle is in a Dangerous Situation and must perform a Proper Response. The specific moment when the vehicle must perform the Proper Response is called the Danger Threshold.

SFF defines identical concepts with slightly modified terminology. Safe longitudinal distance is instead called “the SFF in One Dimension;” safe lateral distance is described as “the SFF in Higher Dimensions.”  Instead of Proper Response, SFF uses “Safety Procedure.” Instead of Dangerous Situation, SFF replaces it with “Unsafe Situation.” And, just to be complete, SFF also recognizes the existence of a Danger Threshold, instead calling it a “Critical Moment.”

This is followed by numerous other close parallels, and just when you think it’s done, he includes a whole separate document (PDF) showing dozens of other cases where Nvidia seems (it’s hard to tell in some cases if you’re not closely familiar with the subject matter) to have followed Mobileye and RSS’s example over and over again.
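To make the shared core of the two frameworks concrete, here is a minimal sketch of the safe longitudinal distance check both describe, based on the formula in Mobileye’s public RSS paper. The parameter values and function names are our own illustration, not taken from either company’s whitepaper.

```python
# Illustrative sketch of the RSS-style "safe longitudinal distance" check.
# Parameter values are hypothetical; see the RSS paper for formal definitions.

def safe_longitudinal_distance(v_rear, v_front, response_time,
                               a_accel_max, b_brake_min, b_brake_max):
    """Minimum gap (meters) the rear car must keep so it can always stop in
    time, even if the car ahead brakes as hard as physically possible."""
    # Distance covered while the rear car reacts (and possibly accelerates).
    d_react = v_rear * response_time + 0.5 * a_accel_max * response_time ** 2
    # Braking distance from the rear car's worst-case speed after reacting.
    v_worst = v_rear + a_accel_max * response_time
    d_rear_brake = v_worst ** 2 / (2 * b_brake_min)
    # Distance the front car covers while braking at its maximum rate.
    d_front_brake = v_front ** 2 / (2 * b_brake_max)
    return max(0.0, d_react + d_rear_brake - d_front_brake)


def is_dangerous(gap, v_rear, v_front):
    """A "Dangerous Situation" in RSS terms (an "Unsafe Situation" in SFF
    terms) arises when the actual gap falls below the safe distance."""
    d_min = safe_longitudinal_distance(
        v_rear, v_front,
        response_time=0.5,  # seconds (illustrative)
        a_accel_max=3.0,    # m/s^2 (illustrative)
        b_brake_min=4.0,    # m/s^2 (illustrative)
        b_brake_max=8.0,    # m/s^2 (illustrative)
    )
    return gap < d_min


print(is_dangerous(gap=20.0, v_rear=25.0, v_front=20.0))  # True: too close
```

Once that threshold is crossed, both schemes call for a corrective maneuver, RSS’s “Proper Response” or SFF’s “Safety Procedure,” which is exactly the parallel Shashua is objecting to.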

Theoretical work like this isn’t really patentable, and patenting wouldn’t be wise anyway, since widespread adoption of the basic ideas is the most desirable outcome (as both papers emphasize). But it’s common for one R&D group to push in one direction and have others refine or create counter-approaches.

You see it in computer vision, where for example Google boffins may publish their early and interesting work, which is picked up by FAIR or Uber and improved or added to in another paper 8 months later. So it really would have been fine for Nvidia to publicly say “Mobileye proposed some stuff, that’s great but here’s our superior approach.”

Instead, there is no mention of RSS at all, which is strange considering the two proposals’ similarity, and the only citation in the SFF whitepaper is “The Safety Force Field, Nvidia, 2017,” in which, we are informed on the very first line, “the precise math is detailed.”

Just one problem: This paper doesn’t seem to exist anywhere. It certainly was never published publicly in any journal or blog post by the company. It has no DOI number and doesn’t show up in any searches or article archives. This appears to be the first time anyone has ever cited it.

It’s not required for rival companies to be civil with each other all the time, but in the research world this will almost certainly be considered poor form by Nvidia, and that can have knock-on effects when it comes to recruiting and overall credibility.

I’ve contacted Nvidia for comment (and to ask for a copy of this mysterious paper). I’ll update this post if I hear back.

Nvidia outbids Microsoft, Intel to acquire chipmaker Mellanox for $6.9 billion

Nvidia will acquire supercomputer chipmaker Mellanox for $6.9 billion, beating out companies like Microsoft and Intel.


Nvidia has come out on top in a bidding war for chipmaker Mellanox.

In a press release on Monday, Nvidia announced its $6.9 billion acquisition of Mellanox, an Israel- and California-based networking technology and supercomputer chipmaker. The all-cash acquisition is the largest ever for Nvidia, a company best known for its graphics processors for high-performance gaming.

Mellanox’s focus is on technology for networking and data storage. The company creates InfiniBand and Ethernet products for use in the cloud and data centers as well as in the artificial intelligence sector. It boasts that its technology is used in half of the top 500 most powerful supercomputers.

“The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters,” said Nvidia founder and CEO Jensen Huang in a statement. “Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine.”

Some of the industry’s biggest players were interested in acquiring Mellanox and had submitted offers before Nvidia swooped in with its winning bid over the last day. Microsoft, Intel, Xilinx, and Broadcom had all emerged as potential buyers before the Nvidia deal was announced.

Microsoft is one of Mellanox’s biggest customers, as it uses the company’s products for its Azure cloud. 

In Intel’s case, the tech giant was likely looking to “corner the market” with its bid, as Engadget points out. The company develops a number of products that overlap with Mellanox’s offerings. Intel reportedly offered $6 billion for the chipmaker.

Nvidia ended up outbidding the other suitors, paying $125 per share.

“We’re excited to unite Nvidia’s accelerated computing platform with Mellanox’s world-renowned accelerated networking platform under one roof to create next-generation datacenter-scale computing solutions,” said Huang.

With the acquisition, the combined Nvidia-Mellanox operation will have “every major cloud service provider and computer maker” as a customer.


China’s social credit system won’t tell you what you can do right

For the past few years, China has been rolling out a Black Mirror/Harry Potter-esque social rating policy known as the Social Credit System (SCS). Far from just a credit score in the financial sense, an SCS score can determine whether a person can buy business-class tickets on trains (or take the train at all) or board flights. Apps are rumored to exist that would tell users whether they are standing near someone with a debt listed in the system, so … they can walk away, I guess.

This is a massive undertaking, and researchers are finally starting to collect good data on the system’s operation, such as a MERICS report looking at the implementation of this complex system, which involves companies and all levels of the Chinese government. Westerners have also increasingly explored the generally positive reception of the system by Chinese citizens, which would seem at odds with typical desires for privacy.

Yet, one of the biggest and most obvious open questions is what exactly will get you rewarded or punished by the SCS? Now, we are finally starting to get answers.

In a new paper that will be presented this week at the ACM FAT* Conference on algorithmic transparency, a group of researchers investigated how positive and negative points were assessed by downloading a corpus of hundreds of thousands of entries from the Beijing SCS website and analyzing it with machine learning-based content analysis tools.

They found that Beijing was remarkably clear about what will get you punished, but vague about what will get you positive points. For instance, the vast majority of the blacklist was made up of people who had failed to pay their debts or who had committed a traffic violation. Meanwhile, the people on the redlist (the positive list) were there because they were, say, great volunteers, but with no criteria for how to attain that status or why they were listed at all.
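As a rough illustration of the kind of content analysis the researchers describe (this is our own sketch, not their pipeline; the file name, column names and keywords below are hypothetical), here is how one might tag a scraped corpus of entries and tally the stated reasons:

```python
# Hypothetical sketch: tag scraped SCS entries by keyword and tally reasons.
# The CSV layout ("list_type", "reason") and the keyword list are assumptions.
import csv
from collections import Counter

KEYWORD_LABELS = {
    "debt": "failure to repay debts",
    "traffic": "traffic violation",
    "volunteer": "volunteering",
}

def label_entry(reason_text):
    """Crude keyword tagging standing in for real content-analysis models."""
    text = reason_text.lower()
    for keyword, label in KEYWORD_LABELS.items():
        if keyword in text:
            return label
    return "unclassified"

counts = Counter()
with open("beijing_scs_entries.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[(row["list_type"], label_entry(row["reason"]))] += 1

for (list_type, label), n in counts.most_common():
    print(f"{list_type:>9}  {label:<25}  {n}")
```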

“It’s very difficult to pinpoint the exact degree of transparency” of the SCS, said Severin Engelmann, one of the lead researchers, who is based at the Technical University of Munich. Far from being just an experiment, the SCS is already quite advanced. “Blacklisting and redlisting are already in place, and they clearly indicate what behavior is bad … but not what behavior is actually good,” he said.

Even more interesting, there are more companies than individuals on the blacklist and redlist within the Beijing corpus, indicating that while the government is certainly concerned with citizens, it is bringing its social control mechanisms to bear on companies perhaps even more aggressively.

Jens Grossklags, another of the researchers, noted that this level of transparency — while inconsistent — was unusual in the West. “It is really fascinating from a data science perspective to see how much information is being made available not just to individuals but to the general public,” he said. He noted that public shaming has been common with the Chinese system, while Western consumers have a hard time accessing their own scores, let alone the scores of others.

The study is one of the first to look at the actual implementation of the SCS and reverse engineer its algorithm, and the researchers may follow up by investigating regional variations and further changes to the system.

TechCrunch is experimenting with new content forms. This is a rough draft of something new – provide your feedback directly to the author (Danny at danny@techcrunch.com) if you like or hate something here.

Share your feedback on your startup’s attorney

My colleague Eric Eldon and I are reaching out to startup founders and execs about their experiences with their attorneys. Our goal is to identify the leading lights of the industry and help spark discussions around best practices. If you have an attorney you thought did a fantastic job for your startup, let us know using this short Google Forms survey and also spread the word. We will share the results and more in the coming weeks.

Stray Thoughts (aka, what I am reading)

Short summaries and analysis of important news stories

Hustling to nothing

Erin Griffith has a great piece on the increasing pervasiveness of hustle culture. This is part of a long-running debate in Silicon Valley between the work-your-ass-off crowd and the productivity-peaks-at-35-hours crowd. The answer, in my mind, is that we should see work in phases — running at 100 MPH all the time is most definitely not sustainable, but frankly, neither is working a rigidly fixed number of hours per week. The vagaries of life and work mean that we need to ramp our efforts up and down as circumstances dictate, while always keeping an eye on our own health.

Nvidia’s troubles continue

We’ve talked a lot about Nvidia over the past few months (Part 1, Part 2, Part 3). Well, the bad news train just keeps rolling. As my colleague Romain Dillet reports, Nvidia is cutting its revenue outlook, and now the stock is falling again (another 14% as I write this). The company cites lowered demand, particularly from China, which is experiencing a major economic slowdown.

Can Chinese startups subsidize customers forever?

The Financial Times asks an important question about the “China model” of startups: should founders heavily subsidize customers in order to buy market share and fight competitors? They point to bike sharing startup Ofo’s collapse, although I would point to the expensive rise of Luckin Coffee as perhaps the latest example. It’s a lesson that Munchery’s investors also have had to learn: at the end of the day, those unit economics better turn positive if a company is to survive.

What’s next

  • More work on societal resilience

This newsletter is written with the assistance of Arman Tabatabai from New York.

Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook with hopes of moving on to something new. The former Director of Engineering for Facebook AI Research had every intention of continuing to work on AI, but quickly ran into a huge issue.

Unless you’re under the umbrella of a big tech company like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate, and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the ’70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, a database of handwritten digits commonly used to train image recognition algorithms.

He started the program and then moved over to the Spell platform. While the local run was still getting started, Spell’s cloud computing platform had already completed the test in under a minute.
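For readers who want a feel for the kind of job in question, here is a toy stand-in for that demo: training a small classifier on scikit-learn’s bundled 8x8 digits dataset, a lightweight cousin of MNIST. It is our own illustration, not the code Piantino ran, and it runs happily on a laptop CPU; the point of a platform like Spell is to push much larger versions of this workload onto cloud GPUs.

```python
# Toy stand-in for the MNIST demo: train a small digit classifier locally.
# Uses scikit-learn's bundled 8x8 digits dataset instead of full MNIST.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)  # simple baseline model
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```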

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest GPUs from Nvidia and Google are available virtually to anyone who wants to run a test.

Individual users can get on for free, specify the type of GPU they need to compute their experiment, and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a ‘spell hyper’ command that offers built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.
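For context, a hyperparameter sweep is simply a fan-out of training runs over candidate settings, with the results tracked so the best configuration can be kept. The sketch below is a generic scikit-learn grid search on the same digits dataset as above; it illustrates the concept only and does not use Spell’s CLI or APIs, and the parameter grid is arbitrary.

```python
# Generic hyperparameter sweep: try several SVM settings and keep the best.
# This illustrates the concept only; it is not Spell's `spell hyper` command.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "gamma": [0.001, 0.01, 0.1]}  # arbitrary grid
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)

print("best params:", search.best_params_)
print(f"best cross-validated accuracy: {search.best_score_:.3f}")
```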

But, perhaps most importantly, Spell allows an organization to instantly transform their model into an API that can be used more broadly throughout the organization, or used directly within an app or website.
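In spirit, turning a model into an API means putting a trained model behind an HTTP endpoint that other teams or apps can call. The sketch below is a generic Flask example under that assumption, not Spell’s actual serving mechanism; the model file path and endpoint name are hypothetical.

```python
# Generic "model behind an HTTP endpoint" sketch; not Spell's serving stack.
# Assumes a previously trained scikit-learn model pickled to model.pkl.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)
with open("model.pkl", "rb") as f:  # hypothetical path to the trained model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. a flat list of numbers
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(port=8080)  # any service in the org can now POST to /predict
```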

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, whereas enterprise clients can get started at $99 per month for each host used over the course of a month. Piantino explains that Spell charges based on concurrent usage, so if a customer has 10 concurrent things running, the company considers that the ‘size’ of the Spell cluster and charges based on that.

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers into their entire suite of products, Spell works with any language framework and lets users plug and play on the platforms of their choice by simply commoditizing the hardware. In fact, Spell doesn’t even share with clients which cloud cluster (Microsoft Azure, Google, or AWS) they’re on.

So, on the one hand the speed of the tests themselves goes up based on access to new hardware, but, because Spell is an agnostic platform, there is also a huge advantage in how quickly one can get set up and start working.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent as well as a designer.