All posts in “machine learning”

Badi gets $30M for AI-aided room rentals

Should you let AI help you pick your roommates? Barcelona-based urban room rental startup Badi thinks so, and it’s just closed a $30M Series B funding round less than a year after a $10M Series A — suggesting algorithm-aided matchmaking is resonating with its target Millennial(ish) ‘Generation Rent’ demographic, as they hunt for their next flatmate.

The startup, founded in 2015, has now raised circa $45M in total, while its platform has passed 12M rental requests. Badi also tells us it passed one million registered users last November, up from around 700,000 in February 2018.

It currently offers a service in key cities in four European markets: Spain, France, Italy and the U.K.

The business was set up to respond to the rising trend of urban living (and indeed tourism) that’s been driving rents and squeezing more people into shared houses to try to make city living affordable.

Badi CEO and founder Carlos Pierre points to estimates that the proportion of the population living in cities will rise from 54% to 66% by 2050. “There will likely be a shortage of homes for people looking to live in cities and as a result, this will lead to an increase in smaller living units or rooms. This is where Badi comes in,” he suggests in a statement.

On the AI front, Badi applies machine learning technology to help with the flatmate matching process — learning from users of its platform, as they match and agree to become flatmates, and then feeding ‘compatibility insights’ back in to keep improving its recommendations.
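
To make that loop concrete, here is a minimal, hypothetical sketch of the general pattern — learn a compatibility score from past match outcomes, then rank new candidates — written in Python with scikit-learn. The features, data and model choice are illustrative assumptions, not Badi’s actual system.

```python
# A hypothetical sketch of learned flatmate matching (not Badi's code):
# fit a model on past pair outcomes, then rank new candidates by
# predicted compatibility. Features and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a past pair: [age_gap, shared_interests,
# schedule_overlap, cleanliness_gap]
past_pairs = np.array([
    [2, 5, 0.8, 1],
    [10, 1, 0.2, 4],
    [5, 3, 0.6, 2],
    [1, 4, 0.9, 0],
])
became_flatmates = np.array([1, 0, 1, 1])  # 1 = pair agreed to live together

model = LogisticRegression().fit(past_pairs, became_flatmates)

# Rank new candidate pairings for a seeker by predicted compatibility.
candidates = np.array([[3, 4, 0.7, 1], [8, 0, 0.3, 5]])
scores = model.predict_proba(candidates)[:, 1]
print(scores.argsort()[::-1], scores)
```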

The Series B is led by U.S.-based consumer tech VC firm Goodwater Capital, making its first investment in a Spanish startup. Also investing are Target Global and existing backers Spark Capital and Mangrove Capital.

Badi says the funding will be put towards consolidating its services in Barcelona, Madrid, London, Paris and Rome, and towards opening new offices in London.

It says it’s spying a big opportunity there (despite Brexit) on account of the UK capital being one of the most expensive cities for renters in the region.

Two other cities it operates in, Barcelona and Madrid, are similarly in demand with renters (and tourists), with Badi noting the rental market in Spain has grown by 130% in the last 10 years and represents 23% of the entire real estate industry.

Paris and Rome are also major tourist destinations, and short-term tourist rentals have been widely linked to increased rents for locals.

Badi’s business is positioned to benefit from the tourist-inflated rent trend as it stands, though cities like Barcelona are also looking at what they can do, policy-wise, to curb rising rents and ensure there is affordable and adequate living space for local families, such as via social housing quotas on developers and even buying vacant buildings themselves to convert to housing stock.

But despite increased political attention on the lack of affordable housing in desirable urban hotspots, it’s highly unlikely that housing pressures are going to let up any time soon.

Badi says the Series B will also be used to expand its team by up to 100%, and to develop additional services intended to make life easier for landlords and tenants.

“In the first quarter of 2019, we will work on improving our product to offer possibilities for professionals and private owners to make their experience on Badi far more efficient. Secondly we are redesigning and launching a new booking system around April 2019 to enhance the booking experience to make it more streamlined and user-centric,” it tells us.

Commenting on the funding in a statement, Chi-Hua Chien, co-founder of Goodwater Capital, added: “We are extremely excited to partner with Badi in their mission to solve the looming urban housing crisis — there simply aren’t enough homes in cities and housing has become too expensive. Badi provides a unique end-to-end rental platform that builds trust and convenience directly into the customer experience, which has enabled them to unlock thousands of new rooms in cities around the world.”

Score more than 50 hours of training in machine learning for less than $40

Just to let you know, if you buy something featured here, Mashable might earn an affiliate commission.


Computers may not wear tennis shoes (yet), but thanks to developing artificial intelligence technologies, they’re smarter than ever before. Along with those technologies has come a relatively new category of computer science called machine learning, or ML.

Similar to statistics, ML involves computer systems that use algorithms to automatically learn from data, recognize patterns, and make decisions, all without outside intervention or explicit directions from human beings. In the real world, you can find it being used in smart assistants like Siri and the Amazon Echo, in online fraud detection services, in the facial recognition feature that identifies photos of you on Facebook, and more recently, in Tesla’s self-driving cars.
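
As a toy illustration of that idea — the system is never handed an explicit rule, it infers one from labeled examples — here is a short, purely hypothetical Python sketch using scikit-learn; the features and labels are invented for the example and aren’t tied to any product mentioned here.

```python
# Toy example: learn to flag "spam-like" messages from labeled examples
# without ever being given an explicit rule. Data is hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Features per message: [length_in_chars, link_count, exclamation_count]
messages = [[120, 0, 0], [35, 4, 6], [200, 1, 0], [20, 5, 8], [90, 0, 1]]
labels = ["ok", "spam", "ok", "spam", "ok"]

clf = DecisionTreeClassifier(random_state=0).fit(messages, labels)
print(clf.predict([[25, 3, 7]]))  # the learned pattern labels this "spam"
```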

ML is distinctive in the world of AI in that it can be used to process vast amounts of data quickly, making it a desirable tech skill among job applicants not only in the fields of computer science and engineering, but also marketing, health care, finance, social media, and beyond. 

For a comprehensive education in ML that doesn’t require a four-year-plus stint at college, look no further than the Machine Learning Masterclass Bundle that just went on sale in the Mashable Shop. Often considered complex, ML is broken down into easily digestible lessons over the course of its eight online classes, which you can access wherever and whenever you’ve got a Wi-Fi connection. With more than 50 hours of hands-on training in ML and related concepts, including the TensorFlow framework, the programming languages R and Python, and the game development engine Unreal, you’ll be an ML master in no time.

A $449 value, the Machine Learning Masterclass Bundle is yours today for just $49 — a price you can lower to $39.69 by entering the code MERRY15 at checkout.

IBM Research develops fingerprint sensor to monitor disease progression

IBM today announced that it has developed a small sensor that sits on a person’s fingernail to help monitor the effectiveness of drugs used to combat the symptoms of Parkinson’s and other diseases. Together with the custom software that analyzes the data, the sensor measures how the nail warps as the user grips something. Because virtually any activity involves gripping objects, that creates a lot of data for the software to analyze.

Another way to get this data would be to attach a sensor to the skin and capture motion, as well as the health of muscles and nerves that way. The team notes that skin-based sensors can cause plenty of other problems, including infections, so it decided to look at using data from how a person’s fingernails bend instead.

For the most part, though, fingernails don’t bend all that much, so the sensor had to be rather sensitive. “It turns out that our fingernails deform — bend and move — in stereotypic ways when we use them for gripping, grasping, and even flexing and extending our fingers,” the researchers explain. “This deformation is usually on the order of single digit microns and not visible to the naked eye. However, it can easily be detected with strain gauge sensors. For context, a typical human hair is between 50 and 100 microns across and a red blood cell is usually less than 10 microns across.”

In its current version, the researchers glue the prototype to the nail. Because fingernails are pretty tough, there’s very little risk in doing so, especially when compared to a sensor that would sit on the skin. The sensor then talks to a smartwatch that runs machine learning models to detect tremors and other symptoms of Parkinson’s disease. That model can detect what a wearer is doing (turning a doorknob, using a screwdriver, etc.). The data and the model are accurate enough to track when wearers write digits with their fingers.
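
As a rough sketch of the general approach described here — and emphatically not IBM’s actual pipeline — a watch-side model might window the strain-gauge signal, compute a few summary features and classify the movement. Everything below (signal shape, features, model choice) is an assumption for illustration.

```python
# Hedged sketch: classify windows of a nail-strain signal as steady grip
# vs. tremor-like motion. Signals, features and labels are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window: np.ndarray) -> list:
    """Summarize one window of strain samples (mean, variability, jitter)."""
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

rng = np.random.default_rng(0)
# Hypothetical training windows: steady grip vs. oscillating (tremor-like).
steady = [rng.normal(5.0, 0.2, 100) for _ in range(20)]
tremor = [5.0 + np.sin(np.linspace(0, 40, 100)) + rng.normal(0, 0.2, 100)
          for _ in range(20)]
X = [features(w) for w in steady + tremor]
y = ["grip"] * 20 + ["tremor"] * 20

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# Classify a new, unseen oscillating window.
print(clf.predict([features(5.0 + np.sin(np.linspace(0, 40, 100)))]))
```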

Over time, the team hopes that it can extend this prototype and the models that analyze the data to recognize other diseases as well. There’s no word on when this sensor could make it onto the market, though.


K Health raises $25M for its AI-powered primary care platform

K Health, the startup providing consumers with an AI-powered primary care platform, has raised $25 million in Series B funding. The round was led by 14W, Comcast Ventures and Mangrove Capital Partners, with participation from Lerer Hippeau, Primary Ventures, BoxGroup, Bessemer Venture Partners and Max Ventures – all previous investors from the company’s seed or Series A rounds.

Co-founded and led by former Vroom CEO and Wix co-CEO, Allon Bloch, K Health (previously Kang Health) looks to equip consumers with a free and easy-to-use application that can provide accurate, personalized, data-driven information about their symptoms and health.

“When your child says their head hurts, you can play doctor for the first two questions or so – where does it hurt? How does it hurt?” Bloch explained in a conversation with TechCrunch. “Then it gets complex really quickly. Are they nauseous or vomiting? Did anything unusual happen? Did you come back from a trip somewhere? Doctors then use differential diagnosis to prove that it’s a tension headache vs other things by ruling out a whole list of chronic or unusual conditions based on their deep knowledge sets.”

K Health’s platform, which currently focuses on primary care, effectively looks to perform a simulation and data-driven version of the differential diagnosis process. On the company’s free mobile app, users spend three-to-four minutes answering an average of 21 questions about their background and the symptoms they’re experiencing.

Using a data set of two billion historical health events over the past 20 years – compiled from doctors’ notes, lab results, hospitalizations, drug statistics and outcome data – K Health is able to compare users to those with similar symptoms and medical histories before zeroing in on a diagnosis.
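
A minimal sketch of that comparative idea — not K Health’s actual model — might encode a user’s questionnaire answers as a vector and retrieve the most similar historical cases. The symptoms, cases and outcome labels below are entirely hypothetical.

```python
# Hedged sketch: similarity lookup over symptom vectors (illustrative only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Columns: [fever, headache, cough, fatigue, days_sick] (1/0 flags, then days)
historical_cases = np.array([
    [1, 1, 0, 1, 2],
    [0, 1, 0, 0, 1],
    [1, 0, 1, 1, 5],
    [0, 0, 1, 0, 7],
])
outcomes = ["flu", "tension headache", "bronchitis", "allergic cough"]

index = NearestNeighbors(n_neighbors=2).fit(historical_cases)
user = np.array([[1, 1, 0, 1, 3]])          # answers from the questionnaire
_, neighbors = index.kneighbors(user)
print([outcomes[i] for i in neighbors[0]])  # most similar prior cases
```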

With its expansive comparative approach, the platform hopes to offer vastly more thorough, precise and user-specific diagnostic information relative to existing consumer alternatives, like WebMD or – what Bloch calls – “Dr. Google”, which often produce broad, downright frightening, and inaccurate diagnoses. 

Users are able to see cases and diagnoses that had symptoms similar to their own, with K Health notifying users with serious conditions when to consider seeking immediate care.

In addition to pure peace of mind, the utility provided to consumers is clear. With more accurate at-home diagnostic information, users are able to make better preventative health decisions, avoid costly and unnecessary trips to in-person care centers or appointments with telehealth providers, and engage in constructive conversations with physicians when they do opt for in-person consultations.

K Health isn’t looking to replace doctors, and in fact, believes its platform can unlock tremendous value for physicians and the broader healthcare system by enabling better resource allocation. 

Without access to quality, personalized medical information at home, many defer to in-person doctor visits even when it may not be necessary. And with around one primary care physician per 1,000 people in the US, primary care practitioners are subsequently faced with an overwhelming number of patients and are unable to focus on more complex cases that may require more time and resources. The high volume of patients also forces physicians to allocate budgets for support staff to help interact with patients, collect initial background information and perform less-demanding tasks.

K Health believes that by providing an accurate alternative for those with lighter or more trivial symptoms, it can help lower unnecessary in-person visits, reduce costs for practices and allow physicians to focus on complicated, rare or resource-intensive cases where their expertise can be most useful and where brute machine processing power is less valuable.

The startup is looking to enhance the platform’s symbiotic patient-doctor benefits further in early-2019, when it plans to launch in-app capabilities that allow users to share their AI-driven health conversations directly with physicians, hopefully reducing time spent on information gathering and enabling more-informed treatment.

With K Health’s AI and machine learning capabilities, the platform also gets smarter with every conversation as it captures more outcomes, hopefully enriching the system and becoming more valuable to all parties over time. Initial results seem promising with K Health currently boasting around 500,000 users, most having joined since this past July.

With the latest round, the company has raised a total of $37.5 million since its late-2016 founding. K Health plans to use the capital to ramp up marketing efforts, further refine its product and technology, and perform additional research to identify methods for earlier detection and areas outside of primary care where the platform may be valuable.

Longer term, the platform has much broader aspirations of driving better health outcomes, normalizing better preventative health behavior, and creating more efficient and affordable global healthcare systems.

The high costs of the American healthcare system and the impact they have on health behavior have been well documented. With heavy copays, premiums and treatment costs, many avoid primary care altogether or opt for more reactionary treatment, leading to worse health outcomes overall.

Issues seen in the American healthcare system are also observable in many emerging market countries with less medical infrastructure. According to the World Health Organization, the international standard for the number of citizens per primary care physician is one for every 1,500 to 2,000 people, with some countries facing much steeper gaps – such as China, where there is only one primary care doctor for every 6,666 people.

The startup hopes it can help limit the immense costs associated with emerging countries educating millions of doctors for eight-to-ten years and help provide more efficient and accessible healthcare systems much more quickly.

By reducing primary care costs for consumers and operating costs for medical practices, while creating a more convenient diagnostic experience, K Health believes it can improve access to information, ultimately driving earlier detection and better health outcomes for consumers everywhere.

Why you need a supercomputer to build a house

When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @Arman.Tabatabai@techcrunch.com.

And to actually get approval for those permits, your future home will have to satisfy a set of conditions that is a factorial of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say “my grandparents built this house with their bare hands?” These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as woodworking.


Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, and they structured those guidelines based on local terrain, climates and risks. Over time, townships, states, federally-recognized organizations and independent groups that sprouted from the insurance industry further created their own “model” building codes.

The complexity starts here. The federal codes and independent agency standards are optional for states, who have their own codes which are optional for cities, who have their own codes that are often inconsistent with the state’s and are optional for individual townships. Thus, local building codes are these ever-changing and constantly-swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again).

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home is dependent on everything from walls to erosion and wind-flow. There are countless types of material and technology used in buildings, all of which are constantly evolving.

Thus, each thousand-page codebook from the various federal, state, city, township and independent agencies – all dictating interconnecting, location- and structure-dependent needs – leads to an incredibly expansive decision tree that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.


Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario-analysis around interweaving building codes and inter-dependent structural variables, allowing users to create compliant designs and regulatory-informed decisions without having to ever encounter the regulations themselves.
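
At its simplest, the rule-checking half of that analysis looks something like the hypothetical Python sketch below: encode a few code requirements as predicates, enumerate combinations of design options, and keep the cheapest compliant one. Real platforms layer machine learning and far larger rule sets on top; all thresholds, products and prices here are invented.

```python
# Hedged sketch of compliance scenario analysis (not any vendor's engine):
# enumerate design options, filter by hypothetical code rules, pick cheapest.
from itertools import product

wall_options = {"2x4_fiberglass": (1200, 13), "2x6_fiberglass": (1500, 19),
                "2x6_foam": (2100, 25)}          # name: (cost, R-value)
window_options = {"double_pane": (800, 0.30), "triple_pane": (1400, 0.20)}

def meets_code(r_value: int, u_factor: float) -> bool:
    # Hypothetical stand-ins for local energy-code minimums.
    return r_value >= 19 and u_factor <= 0.32

compliant = []
for (wall, (w_cost, r)), (win, (g_cost, u)) in product(
        wall_options.items(), window_options.items()):
    if meets_code(r, u):
        compliant.append((w_cost + g_cost, wall, win))

print(min(compliant))  # cheapest combination that satisfies the rules
```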

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help asking myself: why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to figure out?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches that exist. And if there were one set of accepted overarching codes that still set precise requirements for all components of a building, there would be only one path of regulations to follow, greatly reducing the knowledge and analysis necessary to efficiently build a home.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem to be fine with sidestepping the issues caused by hyper-regional building codes and leaving it up to startups to help people wade through the ridiculousness that paves the home-building process, in the same way Concur aids employees with infuriating corporate expensing policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.

And lastly, some reading while in transit: