The road to Lighthouse (and what lies ahead)

Today, I’m excited to announce that Lighthouse is available to the general public. 

It’s been an incredible journey thus far, thanks to the tireless efforts of our outstanding team as well as invaluable feedback from early customers. I wanted to use this milestone to reflect on the unique moment we’re in, how far we’ve come — and how excited we are for what’s next.

In this moment: The tipping point

We’re currently at a tipping point where three fundamental technologies will have a huge positive impact on our lives.

3D sensing: It’s so easy for humans to interpret what we see that we don’t realize how hard it is for computers to do the same. Despite all the advancements in recent years, it is still very difficult to build genuinely useful computer vision systems. 3D sensors provide a way to “cheat”: they directly measure the 3D structure of the environment, and in doing so turn many of the difficult research problems of computer vision into tractable engineering problems. This is why virtually every self-driving car program centers its efforts on 3D sensing with laser range finders from companies like Velodyne and Luminar.

Natural language understanding: The recent rise of interactive assistants like Alexa and Google Assistant underscores how much natural language understanding is changing the way we get computers to do things for us. NLU fundamentally improves how quickly and comprehensively you can tell a computer what you want: instead of translating a thought into a sequence of button presses, you just say the thought. And we’re only scratching the surface of what’s possible in this area.

Deep learning: The basic mathematics of deep learning have been around for decades, but only recently has sufficient computing power become available to train very large and accurate models. Previously, computer vision was useful only in a few restricted areas, like detecting frontally aligned faces or recognizing flat, textured objects like paintings. When my co-founder Hendrik and I were in the early days of our work on self-driving cars at Stanford, recognizing cats in images with high accuracy was still the stuff of science fiction. Deep learning has changed that.

Looking back: How we got here

For us, these developments hold special significance: They are the foundation Lighthouse is built on, and their potential will shape our roadmap in the years to come.

As you may know, Hendrik and I both come from computer vision backgrounds: Hendrik was part of the winning team of the DARPA Grand Challenge, and I worked on perception systems for self-driving cars. That work inspired us to develop and deploy these technologies to give our physical spaces useful and accessible intelligence, starting with the home.

Since then, we’ve worked hard to build a totally new kind of camera, and a totally new kind of AI service. Using the techniques we’d applied to self-driving cars, we built Lighthouse from the ground up. Custom optics for a time-of-flight camera that directly measures the 3D structure of the environment. Recurrent neural networks for computer vision specifically tailored for use cases within the home. And a natural language interface to simplify — and amplify — the user experience.

With these three technologies working in tandem, we quickly realized that Lighthouse could be much more than a security camera. Instead of building a camera that could only record and replay things that happened in the past, we could create a truly intelligent visual perception system for the home, one that keeps track of events as they happen and alerts the homeowner as needed.

So we built, and tested, and iterated, drawing on our own learnings along the way as well as the feedback of our intrepid early users. We developed a camera that can differentiate between pets, people, and shadows; that learns family members by face and can pull up video clips by name; and that will alert you not only when the dog walker arrives, but also when your elderly parents aren’t in the kitchen by a certain time each day. We created a new conduit through which busy people can connect and interact with their households, enriching connections through intelligence.

The road ahead: The future of Lighthouse AI

So, today is the day: Lighthouse is launching to the general public. Getting to this day has been exhilarating and humbling in equal measure, and we are thrilled to have more people experience the technology that we’ve built — and hopefully see the magic of AI as we do. We’ll continue to iterate and improve; in many ways, this is Day 1.

At the same time, we’ll always keep our eye on the horizon for what’s coming next. Our ultimate vision for Lighthouse is to provide useful and accessible intelligence for all physical spaces. An AI camera for the home is just the beginning, and we’re excited for the road ahead.

Thank you for joining us for the ride. It’s going to be a fun one.

 

Alex Teichman, CEO