The production of too many useful things results in too many useless people. – Karl Marx
Last week it was Facebook sharing all the new features we should get excited about, and this week Google took the podium to do pretty much the same. Google I/O is Google’s annual event showcasing all the new goodies it’s working on (many of which we should really be fearing), so that people can start seeing where this world is going, or could be going.
We’ve compiled the stuff you need to know both for your own sanity and in case you’re itching to jump onto the Google train, which we think will end up leading us into some vault with a robot named Eddie telling us what we’d like to eat in the morning. Yeah, picture that guys.
Proceed with caution.
— Vala Afshar (@ValaAfshar) May 11, 2018
Probably one of the biggest announcements from the conference was that Google Assistant can now make calls on your behalf (see Duplex), and it sounds like an actual human speaking. The Assistant is able to understand context and the little nuances in a conversation, much like we humans can, and Google’s demonstration was pretty terrifying: if you were the person on the other end, you’d probably have no idea it wasn’t a human speaking to you. We instantly mind-darted to Home Alone moments and then pondered: Will we need to work out whether we’re actually talking to a human being? Are we going to know? Are we just not going to talk anymore?! Yay? Watch out, call centres, they’re coming for you; the Assistant could replace an entire industry.
Google Assistant will also be adding six new voices, and you won’t have to say “OK Google” or “Hey Google” every time now, as there’s been concern it’s making kids sound rude and expect the same instant answers from humans that they receive from Google. Google even mentioned it’s implementing a “pretty please” command option. Does this mean Google will soon be training us on how to speak politely to one another? Social training from technology sounds about right, right?
You’ll also be able to order food from restaurants and cafes using Assistant, and Google says Assistant will be available in 80 countries by the end of the year.
Google has put a lot of its attention on AI, specifically on predicting what you’re going to say next (they call it suggested text and Smart Compose), so when you’re typing an email, for example, you don’t have to write; no no no, Google will write for you. It’s not just the composing of emails and texts, though; Google is also betting on “suggested actions” in Google Photos to make “smart sharing” easier. Basically, if you have a photo of someone (or you’re in a photo with someone) and click on it, Google will suggest sending it to that person. It recognizes and suggests, zeroing in on content. Quicker recognition, fewer buttons to press, fewer steps to take, that type of deal.
Staying with Google Photos: an example photo of a cute kid was used. “We can make this photo better, drop the background, pop the colour, make the kid cuter…” and “maybe you have a memory in black and white, we can recreate the moment [photo] in colour and make ‘it real’.” This “making it better” verbiage is what Google’s speakers kept touting. The feature is expected to roll out in the next few months.
Google’s smart displays from LG, Lenovo and JBL will go on sale in July, pairing the Assistant with a screen for things like looking up recipes (faster and more efficiently), watching videos, calling friends and so on. You can keep an eye on your home or see in advance what travel is going to be like via Google Maps. “We’re combining voice and visuals together for a completely…” you get the drift.
Google News will “keep you up with the news you care about, understand the full story and enjoy and support the sources you love,” and is now available in 127 countries via iOS and the web.
New details about Android P emerged at Google I/O. With Android P, apps will no longer be able to monitor your activity in the background, and ‘App Actions’ will surface shortcuts to the things you’re likely to do next. They talked about ‘Slices’ as well as Android P’s adaptive battery, which will provide a more “consistent battery experience where the operating system adapts to your usage pattern, so it spends battery only on the apps and services you care about.” The phone also gets adaptive brightness because, “so often what happens is that you manually need to adjust the brightness slider, resulting in the screen later becoming too bright or too dim.” The feature is expected to learn your habits and preferences and then suggest changes. Over time these changes will adapt so you don’t need to think; Google will just do it.
Google peeps were pretty pumped about ML Kit (which includes image labelling, face detection, barcode scanning, landmark detection and smart reply), available as on-device APIs.
Google says they really care about our digital wellbeing too, guys. They want to understand our habits, focus on what matters, help us switch off and wind down and, “most importantly” says Google, find balance for our families. So, there will be more figures provided (how many times you unlock your phone in a day, for example) so that users get even more data about themselves. Does anyone care about knowing how many times they unlocked their phone in a day? Really, though.
We were waiting for something about switching off and not being disturbed, because clearly we cannot be trusted to do these things ourselves. Google has the answer with ‘Shush’, a gesture that activates Do Not Disturb mode. All you do is place your phone screen-down on a flat surface and voila, you’re in DND mode. Starred contacts, however, will still be able to get through to you even in DND mode.
‘Wind Down’ will also, you guessed it, wind you down by dimming your phone’s screen. You can tell your Google Assistant what time you want to go to bed and the feature will automatically fade the screen, to help you wean off bright displays.
Maps is one of the most important augmented reality technologies ever created and a great example of how technology can help us interact more, not less, with the “real world.” https://t.co/3MzBvA6zeR
— Adi Robertson (@thedextriarchy) May 8, 2018
New features will allow users to add buildings and places to Google Maps, which could be useful for locations that don’t have formal addresses, in rural communities for example. There will be a ‘For You’ tab providing the information you need to know about the neighbourhoods you care about, as well as your personal recommendations, what’s trending, etc. Google really wants Google Maps to feel more real, so when you look at buildings, their logos and signs pop up (in true AR form) and you get to see what’s out there in a more life-like manner, but through your phone.
Of course, there were other features too, like ‘Style Match’, where you can zoom in on something (they used a lamp as an example) and then find other items that match that style, helping you make decisions.
And then they talked about Waymo, their fleet of fully self-driving cars. Apparently, Phoenix, Arizona will be the first stop for the driverless service, launching later this year. So, you can order a car and it will pick you up, and you will sit in it, and there will be no human in the driver’s seat, and you will trust that this car will take you where you need to go without a hitch. Apparently, the fleet has self-driven more than six million miles on public roads so far, so that’s…reassuring?
Soon this will be commonplace.
It seems only right to be driving into Google I/O directly behind a Waymo self-driving car. pic.twitter.com/Ex0Q6BuXj2
— Clay Bavor (@claybavor) May 8, 2018
Just going to leave this here.