Crossing a Brdg

Roughly two weeks ago was my last day as a visiting scientist at Google.  It's been four years, one full-time in Mountain View and three working there one day per week in Pittsburgh after my return to Carnegie Mellon.  I'm also about to hit send on an email requesting a leave from CMU starting in January, because...

We're joining the startup race!

Together with Michael Kaminsky (my attached-at-the-hip co-advisor and co-author of 15 years), Robbie Sedgewick (formerly of Apple, Uber, and Google), and Ash Munshi (co-founder with me, Mu, and Alex Smola at Marianas Labs five years ago, currently CEO of Pepperdata, and one-time CTO of Yahoo), we're creating a little company that we're very excited about.

BrdgAI, complete with 2019-era vowel dropping, aims to be the connection between cloud-based machine learning and companies that produce enormous amounts of data at the edge from video and other sensors.  Many real-world deployments of modern machine learning operate under bandwidth constraints at the edge:  Agriculture, mining -- even retail -- can collect vastly more data than they can affordably upload to the cloud for storage, processing, labeling, and training of better models.  (For scale:  a single 1080p camera at a modest 5 Mbps produces roughly 50 GB per day, and these deployments may run dozens of cameras over cellular or satellite links.)  That's where we hope BrdgAI will come in:  Applying enough intelligence at the edge to prioritize which data gets transmitted to the cloud, using that data to train better models, detecting when the input to the local model has drifted far enough from the training data that the model needs updating, and managing the metadata associated with petabytes of data to keep it all searchable and usable.
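To make that last part concrete, here's a minimal sketch of what "score locally, upload selectively, watch for drift" could look like.  Everything in it -- the embedding statistics, thresholds, and file names -- is an assumption for illustration, not a description of how BrdgAI actually works:

```python
# Purely illustrative sketch -- not BrdgAI code.  Assumes the edge node
# receives, along with the model itself, summary statistics of the
# embeddings the model was trained on.
import numpy as np

# Hypothetical files shipped to the edge alongside the model.
TRAIN_MEAN = np.load("train_embedding_mean.npy")  # shape: (d,)
TRAIN_STD = np.load("train_embedding_std.npy")    # shape: (d,)

UPLOAD_THRESHOLD = 3.0  # z-score above which a single frame is "interesting"
DRIFT_THRESHOLD = 2.0   # sustained average that suggests distribution shift
recent_scores = []      # rolling window of recent novelty scores

def novelty_score(embedding):
    """Mean per-dimension z-score of a frame's embedding vs. training data."""
    z = np.abs(embedding - TRAIN_MEAN) / (TRAIN_STD + 1e-8)
    return float(z.mean())

def should_upload(embedding):
    """Score one frame; return True if it's worth the uplink bandwidth."""
    score = novelty_score(embedding)
    recent_scores.append(score)
    if len(recent_scores) > 1000:
        recent_scores.pop(0)
    # One odd frame is worth uploading; a *sustained* rise in the average
    # means the world has drifted and the model itself needs retraining.
    if np.mean(recent_scores) > DRIFT_THRESHOLD:
        print("possible distribution shift: flag model for retraining")
    return score > UPLOAD_THRESHOLD
```

A real system would be much smarter about scoring and scheduling, but the shape is the point:  a cheap local statistic decides what earns scarce uplink bandwidth, and a sustained shift in that statistic becomes the retraining signal.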

At the same time, we're convinced the best plan is to build on the massive amount of in-house ML expertise that the hyperscalers/cloud hosts are creating.  From Google's CloudML to Amazon's SageMaker and Azure's Machine Learning Studio, reinventing the wheel strikes us as unwise.  So we're aiming to help companies leverage the expertise and effort these giants have put into their systems, such as automatically creating and tuning models, without having to store and move petabytes of data to do it, or manage multiple models running on lots of different devices.  And we want to keep the latency and cost benefits of running most inference locally at the edge, enabling near-realtime response.
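For a feel of the edge/cloud split that implies, here's an equally hedged sketch:  a small local model handles the fast path, and only low-confidence inputs make the round trip to a cloud-hosted model.  The SageMaker runtime call is genuine boto3, but the endpoint name and the local model's predict interface are placeholders I've made up:

```python
# Purely illustrative sketch of the fast-path/slow-path split.  The
# boto3 invoke_endpoint call is the real SageMaker runtime API, but the
# endpoint name and local_model interface are hypothetical placeholders.
import json
import boto3

sm_runtime = boto3.client("sagemaker-runtime")
ENDPOINT = "some-cloud-model-endpoint"  # hypothetical endpoint name
CONFIDENCE_FLOOR = 0.85                 # below this, ask the cloud model

def classify(features, local_model):
    # Fast path: local inference, no network round trip -- this is where
    # the near-realtime latency and bandwidth savings come from.
    label, confidence = local_model.predict(features)
    if confidence >= CONFIDENCE_FLOOR:
        return {"label": label, "source": "edge"}
    # Slow path: escalate the hard cases to a larger cloud-hosted model.
    resp = sm_runtime.invoke_endpoint(
        EndpointName=ENDPOINT,
        ContentType="application/json",
        Body=json.dumps({"features": features}),
    )
    return {"label": json.loads(resp["Body"].read()), "source": "cloud"}
```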

(How's the sales pitch working?  :-).

To borrow a phrase, I'm uncomfortably excited about doing this.  I've been pondering for a while now what my Next Big Thing should be (inner voice:  I think this is what they call a mid-life crisis, Dave!), whether that would be a larger, coherent project at CMU, further exploring some of the tremendously fun systems-meets-ML work at Google, or, well -- this.  I have a lot of mostly-wonderful things to say about my time at Google that I probably won't find time to write down, but it was time to kick myself out of my comfort zone again.  Things have moved fast, we've found seed funding, and ... eek, it's time to start building!
