Security, Privacy, Performance, and Machine Learning were focal points of this year’s Google I/O event. Let’s dive into the key takeaways.
Last week, Google hosted I/O 2019 to update users and developers on all things Google, including Chrome, Android, Google Cloud, and TensorFlow. Throughout the conference, each Google team announced its latest platform updates and spoke about how developers are using its ecosystem and how they can make the best use of Google’s APIs and services.
In this post, we focus on Google I/O as a whole. If you’re looking for a deeper dive into Android Q, read our review of the beta release: Android Q is coming. Is your app ready?
Security & Privacy
The focus here is primarily on the end user. Users will gain even more access to the data that Google captures, as well as greater control over how that data flows. Most notably, Google will let users auto-expire collected data after a set time period. These updates are Google’s response to the recent public outcry over its data collection practices, and they signal a commitment to giving end users granular controls.
In addition to the many changes coming to Android, Chrome is also taking steps to prevent websites from tracking and collecting information on users as they browse. Specifically, Chrome will start providing greater transparency into which cookies are in use and will let users control cross-site cookies. As a start, the Chrome team will leverage the SameSite attribute to call out cookies that are used outside of a same-site context. Expanding on this model, I can see Google asking sites to attach metadata to their cookies to provide more context about why a site is using them.
One of the biggest splashes from I/O was the announcement that voice processing is moving from server-side computation to the device itself. Not only does this improve the user experience, it also allows potentially sensitive data to remain on-device. The change extends to Google’s home products, which will now store voice fingerprints and facial recognition maps on-device. Overall, this distributed model, in which devices handle unique user characteristics locally while the cloud improves models with anonymized data, brings long-term benefits for user privacy.
Performance
Users access the web through Chrome (65% market share among desktop browsers) and interact with apps on Android (75% mobile OS share worldwide). With both platforms seeing wide-scale adoption, particularly in bandwidth-constrained environments, each has made performance central to its growth.
As a baseline, the Google team created web.dev, a reference guide to web development best practices. The guide aligns developers around SEO, accessibility, and performance testing. Lighthouse has become a strong tool for evaluating each of these qualities, so Google has added custom budgets, allowing testers to set specific goals for page load times, responsiveness, and other key metrics. In addition, Firebase has brought a beta version of its Performance Monitoring platform to the web, letting developers see how their web app performs across varying user conditions.
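Lighthouse budgets are configured with a budget.json file passed via the --budget-path flag; a minimal sketch (the thresholds here are illustrative, not recommendations):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 },
      { "metric": "first-meaningful-paint", "budget": 2000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "total", "budget": 500 }
    ]
  }
]
```

Timing budgets are in milliseconds and resource-size budgets in kilobytes; Lighthouse reports any metric or resource category that exceeds its budget.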
Machine Learning
Machine Learning was center stage at Google I/O, as Google continues to expand the power of TensorFlow and the tools built on it. In the past, Machine Learning had a significant barrier to entry: even a developer who knew their way around the Keras API had to repeatedly train and tune models, eating up computational resources and time.
In addition to AutoML in Google Cloud, Firebase now allows app developers to easily train models and deploy them directly for use in-app. During the Developer Keynote, this was demonstrated by uploading a data set that simply mapped images of dogs to their respective breeds. Then, with a bit of demo magic, the model was built into an app, and the presenter pointed the camera at a stuffed animal dog as the model tried to classify it as a Border Collie. For common use cases, Firebase continues to grow ML Kit, which offers free usage of its on-device models.
Durham Google I/O Extended
Many thanks to American Underground and the Momentum code school for hosting the local Google I/O Extended session. Extra thanks to the local Google team in Chapel Hill for leading many of the sessions; the original Skia team there has expanded to include some of the ARCore developers as well. This year, two sessions bookended the main keynote.
During the ARCore session, Alan Sheridan and Patrick Reynolds showcased some of the apps that leverage the platform. In addition to numerous games and Google’s basic Measure app, some standout apps let homeowners preview new paint colors and let furniture shoppers place new pieces inside their rooms. Most impressive are the improvements to the underpinnings of ARCore. Surface recognition is drastically improved: in the past, users had to pan back and forth, hunting for the perfect surface on which to draw; now surfaces appear almost instantly when the camera starts. Facial feature tracking also works very well, and you can try it for yourself with Google’s Augmented Faces demo.
Google also announced that AR is expanding into Search. 3D models are now included in search results and can be placed onto a surface in local space for first-person manipulation and exploration. Google Maps is also bringing AR walking directions to Pixel devices; the feature has been in beta with the Maps Local Guides program.
Archive Social’s Evan Halley led a codelab on Puppeteer, the powerful tool that provides an API for controlling Chromium and that allows the Archive Social team to scrape data from social sites. In Halley’s demo, grabbing text and images from HTML components proved easy. Puppeteer also includes device definitions for several smartphones and tablets, which makes it a great way to quickly preview page layouts across these form factors. The API also ties nicely into Google’s demo of “Duplex on the web,” where Puppeteer can fill in and control website components on behalf of a user.
Google is continuing to expand its services to meet developers’ needs and bring performant experiences to users. For Smashing Boxes and our clients, these announcements ensure that we’re building apps that load quickly for users, follow best practices, and are easy for Google Search to index.
We’re excited that Machine Learning can now be utilized by more of our development team, allowing us to build models that are more hands-off and deploy them easily across mobile and web. ML Kit is already providing value to our mobile team.
Watch the full Google I/O developer keynote below.
This article was written by Matt Wood, a principal engineer at Smashing Boxes. When he’s not writing code or blog posts, his ambitions lean toward IoT and FinTech.