WEEKLY ARTICLE

Ray-Ban Meta Glasses: Truly Wearable AI?

Giving AI Eyes and Ears

In Full...

I’ve been excited to get my hands on the new Ray-Ban Meta Glasses, and picked up a pair yesterday.

[Image: an illustration of a robot wearing Ray-Ban Meta sunglasses]
Me, wearing my new glasses

The most intriguing aspect of the glasses for me is the prospect of multimodal AI without taking my phone out of my pocket. Meta probably won’t release that capability until next year, but I do have some observations on how we could get there slightly sooner.

OpenAI released its multimodal version of ChatGPT about a month ago, which means you can now speak to ChatGPT (an oddly stilted style of conversation which is still quite compelling, I wrote about it here) and send it images which it can interpret and tell you about.

One of the cool features OpenAI included in the voice chat version is that on iOS the conversation is treated as a “Live Activity”, which means you can continue the conversation whilst the phone is locked or while you are using other apps.

What this also means is that the Ray-Ban Metas do have an AI you can talk to, inasmuch as any Bluetooth headphones connected to an iPhone can be used to talk to ChatGPT whilst your phone is in your pocket. I’ve looked at options to trigger this via an automation and a Shortcut when the glasses connect to my phone, but ultimately I don’t think that is very useful: I don’t want an AI listening all the time, I want to be able to trigger it when I want it. It did lead me to add an “Ask AI” shortcut to my home screen which immediately starts a voice conversation with ChatGPT, which I suppose will help me understand over time how useful a voice assistant actually is. I also had high hopes that “Hey Siri” would be able to trigger the shortcut, which it can, but not when the phone is locked. So close and yet so far.

As I said above, though, this is something any pair of Bluetooth headphones can do. The grail, and the ultimate reason for getting the Ray-Bans, is letting the AI see what you can see. Given that this feature won’t be officially released until probably next year, what options do we have?

The solution may come in the form of another Meta product: WhatsApp. I built a simple WhatsApp bot earlier this year which lets me hold an ongoing conversation with the API version of GPT-4; it’s quite rudimentary but does the job. The cool thing about Meta’s decision to deeply integrate the glasses with its other products is that you can send messages and photos on WhatsApp directly from the glasses without opening your phone, and the glasses will also read incoming messages out to you. This works pretty well with the bot I’ve built: I can send messages using the glasses and they will read back the responses. I can say “Hey Meta, send a photo to MyBot on WhatsApp” and it will take a snap and send it straight away. The GPT-4V(ision) API hasn’t been released yet, but once it is, I’ll be able to send those pictures to the bot via WhatsApp and the glasses will be able to read back the response.
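For anyone curious what that kind of relay looks like, here is a minimal sketch rather than the bot I actually run. It assumes the WhatsApp Business Cloud API webhook format and the OpenAI chat completions endpoint; the Flask app, environment variable names and Graph API version are placeholders.

```python
# Hypothetical sketch: relay WhatsApp messages to GPT-4 and reply.
# Assumes the WhatsApp Business Cloud API and the OpenAI chat completions
# endpoint; tokens, IDs and the webhook framework are stand-ins.
import os
import requests
from flask import Flask, request

app = Flask(__name__)

OPENAI_KEY = os.environ["OPENAI_API_KEY"]
WHATSAPP_TOKEN = os.environ["WHATSAPP_TOKEN"]
PHONE_NUMBER_ID = os.environ["PHONE_NUMBER_ID"]  # the bot's WhatsApp number ID

# A single running conversation, kept in memory for simplicity.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask_gpt4(text):
    """Append the incoming message to the conversation and ask GPT-4."""
    history.append({"role": "user", "content": text})
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={"model": "gpt-4", "messages": history},
        timeout=60,
    )
    reply = resp.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

def send_whatsapp(to, text):
    """Send the reply back over the WhatsApp Cloud API."""
    requests.post(
        f"https://graph.facebook.com/v18.0/{PHONE_NUMBER_ID}/messages",
        headers={"Authorization": f"Bearer {WHATSAPP_TOKEN}"},
        json={"messaging_product": "whatsapp", "to": to, "text": {"body": text}},
        timeout=30,
    )

@app.route("/webhook", methods=["POST"])
def webhook():
    # The Cloud API delivers incoming messages as nested JSON; pull out the
    # sender and the text, ignoring anything that isn't text (e.g. the photos
    # the glasses send, until a vision-capable API is available).
    change = request.json["entry"][0]["changes"][0]["value"]
    for message in change.get("messages", []):
        if message.get("type") == "text":
            reply = ask_gpt4(message["text"]["body"])
            send_whatsapp(message["from"], reply)
    return "ok", 200
```

The webhook verification handshake and any persistence are left out; the point is just that a few dozen lines gets you a WhatsApp contact the glasses can message and listen to.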

This all feels pretty convoluted, though, and is ultimately an attempt to hack my way around the lack of available functionality. The Meta Glasses are quite cool but they aren’t wearable AI. Yet.

As with many things within the space at the moment, the technology feels tantalisingly close but not quite there. The last time anything felt quite like this to me, though, was the dawn of the smartphone era. Playing with the glasses has left me oddly nostalgic for playing with the accelerometer in the Nokia N95… if we’re at that point with AI then the iPhone moment is just around the corner.
