All posts in “Artificial Intelligence”

SeeTree raises $11.5M to help farmers manage their orchards

SeeTree, a Tel Aviv-based startup that uses drones and artificial intelligence to bring precision agriculture to fruit groves, today announced that it has raised an $11.5 million Series A funding round led by Hanaco Ventures, with participation from previous investors Canaan Partners Israel, Uri Levine and his investors group, iAngel and Mindset. This brings the company’s total funding to $15 million.

The idea behind the company, which also has offices in California and Brazil, is that in the past, drone-based precision agriculture hasn’t really lived up to its promise and didn’t work all that well for permanent crops like fruit trees. “In the past two decades, since the concept was born, the application of it, as well as measuring techniques, has seen limited success — especially in the permanent-crop sector,” said SeeTree CEO Israel Talpaz. “They failed to reach the full potential of precision agriculture as it is meant to be.”

He argues that the future of precision agriculture has to take a more holistic view of the entire farm. He also believes that past efforts didn’t quite offer the quality of data necessary to give permanent crop farmers the actionable recommendations they need to manage their groves.

SeeTree is trying to tackle exactly these issues, and it does so by offering granular per-tree data, based on imagery gathered by drones that the company’s machine learning algorithms then analyze. Using this data, farmers can decide to replace underperforming trees, for example, or map out a plan to selectively harvest based on the size of a tree’s fruits and its development stage. They can also correlate all of this data with their irrigation and fertilization infrastructure to determine the ROI of those efforts.
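To make the per-tree idea concrete, here is a minimal sketch of the kind of triage such data enables. The data layout, scores and threshold are invented for illustration; this is not SeeTree’s actual product or API.

```python
# Hypothetical per-tree triage: given a health score per tree (derived,
# say, from drone imagery), flag trees scoring well below the grove's
# average as candidates for replacement. All names and numbers here are
# illustrative assumptions.

def flag_underperformers(trees, threshold=0.6):
    """trees: list of (tree_id, health_score) pairs, scores in [0, 1].

    Returns ids of trees scoring below `threshold` times the grove mean.
    """
    mean = sum(score for _, score in trees) / len(trees)
    cutoff = mean * threshold
    return [tree_id for tree_id, score in trees if score < cutoff]

grove = [("A1", 0.92), ("A2", 0.88), ("A3", 0.31), ("B1", 0.85), ("B2", 0.40)]
print(flag_underperformers(grove))  # → ['A3', 'B2']
```

The same per-tree scores could feed the selective-harvest and ROI analyses the article mentions; only the flagging step is shown here.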

“Traditionally, farmers made large-scale business decisions based on intuitions that would come from limited (and often unreliable) small-scale testing done by the naked eye,” said Talpaz. “With SeeTree, farmers can now make critical decisions based on accurate and consistent small and large-scale data, connecting their actions to actual results in the field.”

SeeTree was founded by Talpaz, who like so many Israeli entrepreneurs previously worked for the country’s intelligence services, as well as Barak Hachamov (who you may remember from his early personalized news startup my6sense) and Guy Morgenstern, who has extensive experience as an R&D executive with a background in image processing and communications systems.


Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook with hopes of moving on to something new. The former Director of Engineering for Facebook AI Research had every intention of continuing to work on AI, but quickly ran into a huge issue.

Unless you’re under the umbrella of one of the big tech companies like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate, and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the 70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, which is a database of handwritten digits commonly used to train image detection algorithms.

He started the program and then moved over to the Spell platform. While the original program was just getting started, Spell’s cloud computing platform had completed the test in under a minute.
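For readers unfamiliar with this kind of experiment, here is a rough stand-in using scikit-learn’s small built-in digits dataset rather than full MNIST. It is not the code Piantino demonstrated, just the shape of a typical handwritten-digit classification run.

```python
# Train a simple classifier on handwritten digits, the same flavor of
# experiment as the MNIST demo described above (scikit-learn's 8x8 digits
# set stands in for MNIST to keep the example small and fast).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

This toy run finishes in seconds on a laptop; the pitch of a platform like Spell is that the same workflow, pointed at a real dataset and model, runs on rented GPUs instead of local hardware.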

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest GPUs from NVIDIA and Google are available virtually, so anyone can run their tests.

Individual users can get on for free, specify the type of GPU they need to compute their experiment, and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a ‘spell hyper’ command that offers built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.

But, perhaps most importantly, Spell allows an organization to instantly transform their model into an API that can be used more broadly throughout the organization, or used directly within an app or website.

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, whereas enterprise clients can get started at $99/month per host used over the course of a month. Piantino explains that Spell charges based on concurrent usage, so if the customer has 10 concurrent things running, the company considers that the ‘size’ of the Spell cluster and charges based on that.
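A sketch of what billing by concurrency could look like: the ‘size’ of a cluster is the peak number of runs active at once. The interval-counting logic below is an assumption about how such metering might work, not a description of Spell’s internals; only the $99-per-host figure comes from the article.

```python
# Compute peak concurrency from a list of run intervals via a sweep over
# start/end events, then price the cluster at its peak size.

def peak_concurrency(runs):
    """runs: list of (start, end) timestamps; returns max simultaneous runs."""
    events = [(start, 1) for start, _ in runs] + [(end, -1) for _, end in runs]
    active = peak = 0
    for _, delta in sorted(events):  # ends sort before starts at equal times
        active += delta
        peak = max(peak, active)
    return peak

runs = [(0, 10), (2, 6), (4, 12), (11, 15)]
size = peak_concurrency(runs)
print(size, "concurrent hosts ->", size * 99, "USD/month")  # 3 -> 297
```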

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers into their entire suite of products, Spell works with any language framework and lets users plug and play on the platforms of their choice by simply commodifying the hardware. In fact, Spell doesn’t even share with clients which cloud cluster (Microsoft Azure, Google, or AWS) they’re on.

So, on the one hand the speed of the tests themselves goes up based on access to new hardware, but, because Spell is an agnostic platform, there is also a huge advantage in how quickly one can get set up and start working.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent as well as a designer.

Facebook urged to give users greater control over what they see

Academics at the universities of Oxford and Stanford think Facebook should give users greater transparency and control over the content they see on its platform.

They also believe the social networking giant should radically reform its governance structures and processes to throw more light on content decisions, including by looping in more external experts to steer policy.

Such changes are needed to address widespread concerns about Facebook’s impact on democracy and on free speech, they argue in a report published today which includes a series of recommendations for reforming Facebook (entitled: Glasnost! Nine Ways Facebook Can Make Itself a Better Forum for Free Speech and Democracy.)

“There is a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities as well as international human rights norms,” writes lead author Timothy Garton Ash.

“Executive decisions made by Facebook have major political, social, and cultural consequences around the world. A single small change to the News Feed algorithm, or to content policy, can have an impact that is both faster and wider than that of any single piece of national (or even EU-wide) legislation.”

Here’s a rundown of the report’s nine recommendations:

  1. Tighten Community Standards wording on hate speech — the academics argue that Facebook’s current wording on key areas is “overbroad, leading to erratic, inconsistent and often context-insensitive takedowns”; and also generating “a high proportion of contested cases”. Clear and tighter wording could make consistent implementation easier, they believe
  2. Hire more and contextually expert content reviewers — “the issue is quality as well as quantity”, the report points out, pressing Facebook to hire more human content reviewers plus a layer of senior reviewers with “relevant cultural and political expertise”; and also to engage more with trusted external sources such as NGOs. “It remains clear that AI will not resolve the issues with the deeply context-dependent judgements that need to be made in determining when, for example, hate speech becomes dangerous speech,” they write
  3. Increase ‘decisional transparency’ — Facebook still does not offer adequate transparency around content moderation policies and practices, they suggest, arguing it needs to publish more detail on its procedures, including specifically calling for the company to “post and widely publicize case studies” to provide users with more guidance and to provide potential grounds for appeals
  4. Expand and improve the appeals process — also on appeals, the report recommends Facebook give reviewers much more context around disputed pieces of content, and also provide appeals statistics to analysts and users. “Under the current regime, the initial internal reviewer has very limited information about the individual who posted a piece of content, despite the importance of context for adjudicating appeals,” they write. “A Holocaust image has a very different significance when posted by a Holocaust survivor or by a Neo-Nazi.” They also suggest Facebook should work, in dialogue with users, on developing an appeals due process that is “more functional and usable for the average user” — such as with the help of a content policy advisory group
  5. Provide meaningful News Feed controls for users — the report suggests Facebook users should have more meaningful controls over what they see in the News Feed, with the authors dubbing current controls as “altogether inadequate”, and advocating for far more. Such as the ability to switch off the algorithmic feed entirely (without the chronological view being defaulted back to algorithm when the user reloads, as is the case now for anyone who switches away from the AI-controlled view). The report also suggests adding a News Feed analytics feature, to give users a breakdown of sources they’re seeing and how that compares with control groups of other users. Facebook could also offer a button to let users adopt a different perspective by exposing them to content they don’t usually see, they suggest
  6. Expand context and fact-checking facilities — the report pushes for “significant” resources to be ploughed into identifying “the best, most authoritative, and trusted sources” of contextual information for each country, region and culture — to help feed Facebook’s existing (but still inadequate and not universally distributed) fact-checking efforts
  7. Establish regular auditing mechanisms — there have been some civil rights audits of Facebook’s processes (such as this one, which suggested Facebook formalizes a human rights strategy) but the report urges the company to open itself up to more of these, suggesting the model of meaningful audits should be replicated and extended to other areas of public concern, including privacy, algorithmic fairness and bias, diversity and more
  8. Create an external content policy advisory group — key content stakeholders from civil society, academia and journalism should be enlisted by Facebook for an expert policy advisory group to provide ongoing feedback on its content standards and implementation; as well as also to review its appeals record. “Creating a body that has credibility with the extraordinarily wide geographical, cultural, and political range of Facebook users would be a major challenge, but a carefully chosen, formalized, expert advisory group would be a first step,” they write, noting that Facebook has begun moving in this direction but adding: “These efforts should be formalized and expanded in a transparent manner.”
  9. Establish an external appeals body — the report also urges “independent, external” ultimate control of Facebook’s content policy, via an appeals body that sits outside the mothership and includes representation from civil society and digital rights advocacy groups. The authors note Facebook is already flirting with this idea, citing comments made by Mark Zuckerberg last November, but also warn this needs to be done properly if power is to be “meaningfully” devolved. “Facebook should strive to make this appeals body as transparent as possible… and allow it to influence broad areas of content policy… not just rule on specific content takedowns,” they warn

In conclusion, the report notes that the content issues it’s focused on are not only attached to Facebook’s business but apply widely across various Internet platforms — hence growing interest in some form of “industry-wide self-regulatory body”. Though it suggests that achieving that kind of overarching regulation will be “a long and complex task”.

In the meantime the academics remain convinced there is “a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities, as well as international human rights norms” — with the company front and center of the frame given its massive size (2.2BN+ active users).

“We recognize that Facebook employees are making difficult, complex, contextual judgements every day, balancing competing interests, and not all those decisions will benefit from full transparency. But all would be better for more regular, active interchange with the worlds of academic research, investigative journalism, and civil society advocacy,” they add.

We’ve reached out to Facebook for comment on their recommendations.

The report was prepared by the Free Speech Debate project of the Dahrendorf Programme for the Study of Freedom, St. Antony’s College, Oxford, in partnership with the Reuters Institute for the Study of Journalism, University of Oxford, the Project on Democracy and the Internet, Stanford University, and the Hoover Institution, Stanford University.

Last year we offered a few of our own ideas for fixing Facebook — including suggesting the company hire orders of magnitude more expert content reviewers, as well as providing greater transparency into key decisions and processes.

Robots learn to grab and scramble with new levels of agility

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as able to be grabbed either by an ordinary pincer grip or with a suction cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell what exactly the Berkeley system, dubbed Dex-Net 4.0, is actually basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”
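In pseudocode terms, the decision Goldberg describes amounts to scoring a candidate grasp under both models and acting with whichever tool is more confident. The scores and threshold below are invented stand-ins; the real system derives grasp quality from two trained neural networks over depth imagery.

```python
# Toy version of the suction-vs-pincer decision: each grasp model emits a
# quality score in [0, 1]; pick the higher-scoring tool, or skip the grasp
# entirely if neither model is confident enough. Scores here are made up.

def choose_tool(suction_score, pincer_score, min_confidence=0.5):
    tool, score = max(
        [("suction", suction_score), ("pincer", pincer_score)],
        key=lambda pair: pair[1],
    )
    return tool if score >= min_confidence else "skip"

print(choose_tool(0.91, 0.40))  # smooth planar surface: suction
print(choose_tool(0.20, 0.75))  # e.g. a plush toy: pincer
print(choose_tool(0.30, 0.25))  # neither confident: skip
```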

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate that’s what the researchers did here, and not only did they arrive at a faster trot for the bot (above), but taught it an amazing new trick: getting up from a fall. Any fall. Watch this:

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
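The select-and-keep loop described above can be sketched generically. This is a bare-bones evolutionary strategy against a toy “simulator”, not the ETH Zurich training setup; the fitness function, population sizes and noise level are all invented for illustration.

```python
# Toy evolutionary loop: evaluate candidate behaviors (here, just parameter
# vectors) in a cheap stand-in simulator, keep the best performers, and
# perturb them to form the next generation.
import random

def fitness(params):
    # Stand-in "simulator": reward parameters near a hidden optimum (0.7).
    return -sum((p - 0.7) ** 2 for p in params)

def evolve(pop_size=50, dims=4, generations=30, keep=10, noise=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elites = pop[:keep]  # keep the behaviors that worked...
        pop = elites + [     # ...and mutate them to refill the population
            [p + rng.gauss(0, noise) for p in rng.choice(elites)]
            for _ in range(pop_size - keep)
        ]
    return max(pop, key=fitness)

best = evolve()
print([round(p, 2) for p in best])  # each parameter converges toward 0.7
```

The real work, of course, is in the simulator and in transferring the learned behavior to physical hardware; the loop itself is this simple in outline.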

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re given this on a sheet of paper:

As a human with a brain, you take this paper for instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide that the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps towards being able to connect abstract representations like the above with the real world, a task that involves a significant amount of what amounts to a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.
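One sub-problem in that pipeline — matching a symbol in a diagram to a scene object that looks quite different — can be illustrated crudely with color matching. The Vicarious system does far more than this; the palette, symbols and objects below are invented for illustration.

```python
# Map diagram symbols to scene objects by nearest palette color, ignoring
# differences in shape and background: a crude stand-in for the
# diagram-to-world correspondence step described above.

def nearest_color(rgb, palette):
    """Name of the palette entry closest to `rgb` (squared distance)."""
    return min(
        palette,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, palette[name])),
    )

palette = {"red": (255, 0, 0), "green": (0, 255, 0)}
diagram_symbols = {"circle_1": (250, 10, 5), "circle_2": (12, 240, 20)}
scene_objects = {"ball_a": (180, 40, 35), "ball_b": (30, 170, 50)}

mapping = {
    symbol: [obj for obj, rgb in scene_objects.items()
             if nearest_color(rgb, palette) == nearest_color(srgb, palette)]
    for symbol, srgb in diagram_symbols.items()
}
print(mapping)  # → {'circle_1': ['ball_a'], 'circle_2': ['ball_b']}
```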

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

HyperScience, the machine learning startup tackling data entry, raises $30 million Series B

HyperScience, the machine learning company that turns human-readable data into machine-readable data, has today announced the close of a $30 million Series B funding round led by Stripes Group, with participation from existing investors FirstMark Capital and Felicis Ventures as well as new investors Battery Ventures, Global Founders Fund, TD Ameritrade, and QBE.

HyperScience launched out of stealth in 2016 with a suite of enterprise products focused on the healthcare, insurance, finance and government industries. The original products were HSForms (which handled data-entry by converting hand-written forms to digital), HSFreeForm (which did a similar function for hand-written emails or other non-form content) and HSEvaluate (which could parse through complex data on a form to help insurance companies approve or deny claims by pulling out all the relevant info).

Now, the company has combined all three of those products into a single product called HyperScience. The product is meant to help companies and organizations reduce their data-entry backlog and better serve their customers, saving money and resources.

The idea is that many of the forms we use in life or in the workplace are in an arbitrary format. My bank statements don’t look the same as your bank statements, and invoices from your company might look different than invoices from my company.

HyperScience is able to take those forms and pipe them into the system quickly and easily, without help from humans.

Instead of charging by seat, HyperScience charges by document, as the mere use of HyperScience should mean that fewer humans are actually ‘using’ the product.

The latest round brings HyperScience’s total funding to $50 million, and the company plans to use a good deal of that funding to grow the team.

“We have a product that works and a phenomenally good product market fit,” said CEO Peter Brodsky. “What will determine our success is our ability to build and scale the team.”