Google I/O Device Lab

July 7, 2014

One of my highlights from Google I/O this year was the Device Lab that Matt Gaunt and I built to show developers how their site looks across the multi-device web. It was a really cool thing to see all kinds of different sites working on phones, phablets, tablets, computers and even TVs.

A few folks have asked how we set things up, and how we did it, so I figured I’d document our process here!

Device selection

We ended up with 46 different devices on the wall.

We picked our devices mostly from the Google Play Edition lineup, plus a few other fun and shiny new devices that would look cool on the wall. If you’re considering building something like this for your company or team, look at your analytics to understand what your users are using, and then regularly add new devices as the usage changes. In most cases we had at least two of every device, so that we could have one in portrait and one in landscape.

Network connection

When we ran the 46+ devices in the office prior to I/O, everything ran beautifully, but we knew with all the attendees, each with at least one, potentially two or three devices, the network would be a bit of a challenge.

We had hoped to use OTG Y-cables to both power the devices and connect them to a wired network, but the cables we got provided a network connection with no power, which meant the screens wouldn’t stay on. So, at I/O, we connected all of the devices via WiFi to a dedicated access point with its own SSID; that way we could ensure the devices were connecting to that access point, as opposed to one potentially several hundred feet away or even on another floor.

We tweaked a few settings on the Android devices to optimize the network connection. For example we disabled Avoid poor network connections and disabled Wi-Fi optimization to keep things working as well as possible.

Power

We kept all of the devices powered via USB at all times so that we could keep the screens on and didn’t have to worry about recharging anything. Rather than using the individual wall warts, we picked up a bunch of Anker 40W 5-Port USB power supplies. At full screen brightness, the Nexus 10s draw power from the USB port as fast as it can supply it; Pogo cables provided more.

Screens

To keep the screens on and prevent dimming, we enabled Stay awake in the Developer options panel, which ensured that as long as the devices were plugged in, the screens would stay on. We also installed Keep Screen On LITE, which prevented the screens from dimming after a period of time.

Attaching devices to the wall

This was the easy part: we just used good ol’ velcro! To get the cool-looking pattern, we cut out stencils of each device using colored paper, then taped them to the wall and kept rearranging them until we got the look we wanted. Once we knew where everything was going to go, we started sticking the devices up.

Pushing URLs to the devices

The back end sits on Compute Engine and runs a little Node app that pushes URLs out to the devices using Google Cloud Messaging. You can grab the source for the Mini Mobile Device Lab from GitHub and give it a shot yourself. Big props to Matt for doing most of the work here. We have a few ideas on how we can make this easier to set up, and potentially allow you to run it without any kind of back end infrastructure. More on that later ;)
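The push itself is just an HTTP POST to GCM. Here’s a rough Python sketch of what the server-side push might look like, using the GCM HTTP endpoint of the era; the API key, registration IDs, and payload keys are placeholders, not the actual Mini Mobile Device Lab message format:

```python
import json
import urllib.request

# Legacy GCM HTTP endpoint (circa 2014)
GCM_URL = "https://android.googleapis.com/gcm/send"

def build_push(api_key, registration_ids, url, browser="chrome"):
    """Build the HTTP request that tells each device to open a URL.

    The 'url' and 'browser' payload keys are illustrative; the real
    app defines its own message format.
    """
    body = json.dumps({
        "registration_ids": registration_ids,
        "data": {"url": url, "browser": browser},
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": "key=" + api_key,  # GCM server API key
    }
    return urllib.request.Request(GCM_URL, data=body, headers=headers)

# To actually push (needs a real API key and device registration IDs):
# urllib.request.urlopen(
#     build_push("MY_API_KEY", ["device-reg-id"], "https://example.com"))
```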

Android Devices

Each device runs a simple Cordova app that listens for a Google Cloud Message; when it gets one, it fires an intent to start the browser and open the URL in the message. This meant we could open not only Chrome, but any browser installed on the device: Firefox, Opera, the Android browser, etc.

With 40+ Android devices connected, we found out that the Play Store gets a little cranky when there are too many devices connected to the same email address, so we ended up having to create three or four different email accounts that we used to sign into the devices and install the software.

iOS and ChromeOS

Since the software we used depended on Google Cloud Messaging and an app that fired Android intents, we had to roll a different solution for non-Android devices. We wrote a simple AppEngine app that used the Channel API to push messages out to the devices. On each device, we opened a “background” page that listened for a message/URL and then simply did a window.open().

Android TV

We built a custom one-off WebView app with two WebViews: one always hidden, connected to the AppEngine app we used for iOS and ChromeOS, and one that showed the URL it was sent.

  • #web
  • #multi-device
  • #mobile
  • #mobileweb


Raspberry Pi Quick Start

August 27, 2013

Last night, I needed to re-image the SD card for my Raspberry Pi to get things set up from a clean state. It’d been a few months since I initially did it, and I couldn’t remember exactly what I’d installed or what config changes I’d made, so I figured I’d document things a little better this time. So, here they are. I’ve pushed all of the scripts and config files up as a few Gists on GitHub to make it easier to edit or change them later.

Note: If you don’t see the scripts appearing inline, try refreshing.

Step 1 - Create the initial disk image

  1. Download the latest Wheezy image from http://www.raspberrypi.org/downloads
  2. Create the image

  3. In raspi-config, set:

    • Expand the File System
    • Change system password
    • Set localization options, including locale, timezone, and keyboard layout
    • Set machine name
    • Enable SSH

Step 2 - Setup the Wireless Network

  1. Log in as pi
  2. Run: sudo nano /etc/network/interfaces
  3. Replace the existing content with

  4. Reboot
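If the embedded gist above doesn’t render for you, a typical /etc/network/interfaces for DHCP Wi-Fi on Wheezy looks roughly like this - note this is a generic example, not necessarily the gist’s exact contents, and the SSID and password are placeholders:

```
auto lo
iface lo inet loopback

iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-ssid "YourNetworkName"
    wpa-psk  "YourNetworkPassword"
```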

Step 3 - Create the primary user account

  1. Login as pi
  2. Run curl https://gist.github.com/petele/6346707/raw/create-user.sh > create-user.sh && chmod u+x create-user.sh && ./create-user.sh

Note: You’ll probably want to fork this file since you might not want your user name to be pete ;)

Step 4 - Install & Configure Software

  1. Login as newly created user
  2. Run curl https://gist.github.com/petele/6347546/raw/go.sh > go.sh && chmod u+x go.sh && ./go.sh

Note: You’ll probably want to fork this file since you probably don’t want my GitHub config settings ;)

This script will download the config files from https://gist.github.com/petele/6347546. As part of the setup it will:

  • Update all current software
  • Install new software including Lynx, Apache, VSFTPD, Avahi, Python Setup Tools, OpenSSL, RPIO, sleekxmpp, requests and a few others
  • Configure Git
  • Configure Avahi
  • Enable autologin on the console and run ~/login.sh for every user at login
  • Configure VSFTPD, Apache (though it doesn’t properly configure SSL yet), etc…
  • Seriously loosen security settings in Lynx - I need this for the Google Voice integration in my home automation system, so use this piece with extreme caution!

There you go - your Raspberry Pi disk image is ready to go!

  • #CodeSample
  • #pi
  • #home automation
  • #raspberry pi
  • #getting started


A web UI for my Pi

August 14, 2013

My project this weekend on my home automation system was two-fold: first, I wanted to clean up the code and make it a bit more object oriented, and second, I wanted to add a web interface that’s accessible outside my apartment.

The largest part of the weekend was spent re-architecting things. Now each component is effectively self-contained, so it will be easier to add or remove components later, and easier for other people to use. Once that was done, I dug into the web interface. The Pi does a POST to an AppEngine app every 30 seconds (configurable) with the status of all of the devices: lights, air conditioners, door, even the Harmony remote. Since the data changes often, is less than about 10k, and doesn’t need to be stored for any length of time, I decided to just store it in memcache to make retrieving it faster.
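The status POST is simple enough to sketch. Here’s an illustrative Python version, not the actual code; the endpoint URL and field names are made up:

```python
import json
import time
import urllib.request

# Hypothetical AppEngine endpoint; the real app has its own URL and schema.
STATUS_URL = "https://my-status-app.appspot.com/status"

def build_status_report(state):
    """Serialize the current device state as a JSON POST request."""
    body = json.dumps(state).encode("utf-8")
    return urllib.request.Request(
        STATUS_URL, data=body,
        headers={"Content-Type": "application/json"})

def report_loop(get_state, interval=30):
    """POST the state every `interval` seconds (30 by default)."""
    while True:
        urllib.request.urlopen(build_status_report(get_state()))
        time.sleep(interval)
```

On the AppEngine side, the handler just drops the blob into memcache keyed by device, so the web UI can fetch it without touching the datastore.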

On the client side, I wanted to emulate the look of the Targus Keypads that I’ve got throughout my place acting as light switches, which is why you see the layout as it is. Across the top is the status of the different devices, for example the state is AWAY and the front door is closed. Both the living room (LR) and bedroom (BR) air conditioners are off, the temperature inside and out is 81° and the amp is off.

The buttons control things in the apartment, the red and blue buttons are modifier keys that affect the gray buttons. For example, pushing Off then Kitchen turns the kitchen lights off. The plus and minus keys only affect the air conditioners right now, though they used to also dim the lights. The buttons show as depressed when that item or set of lights are turned on.

Right now I’m simply doing an XMLHttpRequest on a setInterval to refresh the data, but I’m planning to modify it to use the Channel API in the near future, which will help eliminate some of the existing lag. I’m also trying to decide if it’s worth adding a live webcam view; I’m not sure I’d really use it, so I’m thinking not, but who knows.

The next part of the project is to solder up the Adafruit RGB Negative 16x2 Keypad Kit and turn that into the alarm clock beside my bed. Not only would it wake me up in the morning, it would also turn the lights on and turn the stereo on to FM radio.

  • #mobile
  • #home automation
  • #hue
  • #raspberry pi
  • #hue api
  • #appengine


Home Automation For Geeks

July 9, 2013

I’ve always had a fascination with home automation systems, things that make your life easier and computers that do the stuff that I’m too lazy to do. In college, I had my tiny little apartment in Ottawa all wired up with X10 and this weekend, I “finished” my most recent creation. Though honestly, is it ever really done?

It all started a few months ago when I picked up a set of the Philips Hue light bulbs - they’re amazing: LED light bulbs that are fully addressable and programmable via a simple-to-use REST API. The biggest problem I had was that to really use them, you had to leave the light switch on and turn the lights on and off via the app. But it gets to be a small pain in the butt if you have to pull your phone out of your pocket every time you want to turn a light bulb on or off.

The Kit

Okay, so what’s it do?

The Raspberry Pi is effectively the brains of the apartment: it keeps simple state and sends commands to the lights, the iTach and the Harmony hub. The keypads are placed throughout the apartment and act like multi-function light switches. The zero key and the enter key have special meaning though, putting the system into either Away mode or Home mode. Away mode is just a simple macro that turns off all of the Hue lights, uses the iTach to turn off the air conditioners, shuts the TV and stereo off via the Harmony Hub, and then waits until the front door opens again. When the front door opens, a magnetic door switch saves me from having to hit the Home button, running another simple macro that turns the lights on and, depending on both the inside and outside temperature, turns the air conditioners on. Oh, and it also turns one of the lights in the living room purple when I have an unread message on Google Voice.

Building out the system

Building out the system, some parts were easier than others. The API for the Philips Hue lights: awesome! The iTach to control the air conditioners: good, once I figured out how to teach it IR commands. Google Voice: yeah, there’s no API there - that required a little thinking. And the Harmony Hub: there’s no published API for the Harmony Ultimate Hub, and wow, that one sucked.

My original plan was to write a Chrome Packaged App to handle the lights and run a few USB numeric keypads around my place. I figured the cost of leaving a Chromebook running 24x7 would be acceptable given the energy savings I was getting from the lights. But I kept hitting a single, pretty simple problem: I couldn’t keep the Chromebook from locking. I could prevent it from going to sleep, but it still locked, leaving the USB keypads useless, since all they could do was type passwords into the machine. So, I pulled the Raspberry Pi I’d ordered months ago out of a drawer and started fiddling; within a few hours I had a working prototype - I was stoked.

The un-official Google Voice API

To be clear, there isn’t a Google Voice API available to developers (boo!), though there are a few good libraries out there that are worth checking out. Sadly, if you have two factor authentication turned on, none of them work, since they depend on sending your username and password to Google and doing some unholy magic to log in. And if you don’t have two factor turned on, please go turn on two factor authentication RIGHT NOW. I’ll wait. No seriously, I’ll wait.

I was pretty resigned to not being able to integrate a new-message indicator into the system after spending a few days trying to figure out if there was any way around the two factor stuff, or if I could somehow make a web request with the right cookies. That is, of course, until I was reading up on the history of browsers and was reminded of Lynx, the first browser I used. Did it still exist? Would it work? The answer is yes! Sure enough, I installed Lynx on my Pi and tried logging into Google Voice. I figured if I could log in, I should be able to somehow scrape the results. Sure enough, it worked. Now to figure out how to scrape some results!

After a little searching and some Chrome DevTools digging, I found Google Voice has a JSON end point that will give you a simple JSON object with message counts:

https://www.google.com/voice/request/unread

{
  "unreadCounts": {
    "all": 3,
    "inbox": 1,
    "missed": 0,
    "placed": 0,
    "received": 0,
    "recorded": 0,
    "sms": 0,
    "spam": 28,
    "trash": 0,
    "unread": 1,
    "voicemail": 3
  },
  "r": "SomeMagicCodeHere"
}

And BOOM, I was off! Unfortunately, it means I have to fork a process, start Lynx, request the URL, and parse the result every time I want to check for new messages. And my cookies do expire, so I have to log back in every so often to re-authenticate, but it’s better than not having any reporting at all!
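The check is small enough to sketch in Python. This is illustrative, not my actual code; it assumes Lynx is installed and already holds a logged-in cookie jar - when the cookies expire, you get a login page back instead of JSON:

```python
import json
import subprocess

UNREAD_URL = "https://www.google.com/voice/request/unread"

def fetch_unread_count():
    """Dump the unread endpoint through Lynx and parse the JSON.

    Forks a process per check, exactly as described above; raises if
    Lynx returns a login page instead of JSON (expired cookies).
    """
    out = subprocess.check_output(["lynx", "-dump", UNREAD_URL])
    return parse_unread(out.decode("utf-8"))

def parse_unread(text):
    """Pull the unread count out of the endpoint's JSON response."""
    data = json.loads(text)
    return data["unreadCounts"]["unread"]
```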

IP to IR with the iTach

A while ago, I came across the iTach IP2IR controller. It’s an interesting little device, I think mostly meant for high-end home automation systems, but it wasn’t that expensive and I figured I’d give it a go. It’s pretty simple: it has a network jack and three 1/8” jacks on the back. The 1/8” jacks connect to IR emitters that you can either place in the immediate vicinity of a device, or swap for an IR blaster and just put in the room. The manual is pretty thorough, except they left out all of the important intro stuff, like the difference between a blaster and an emitter, or where the IR learning port was. Oh, and they don’t have a Mac app, so you need to grab a third-party app to learn commands.

Once I got this guy somewhat figured out, the rest was pretty easy. It sends out regular UDP multicast packets so you can find it on the network, and then you communicate with it by opening a TCP socket and sending an IR command to one of the three IR ports.

For example, to set the bedroom air conditioner to 72°, you’d connect to the iTach on port 4998 and send:

sendir,1:3,1,37993,1,1,319,160,21,61,21,21,21,21,21,21,21,61,21,21,21,
21,21,21,21,21,21,21,21,61,21,21,21,61,21,21,21,21,21,21,21,61,21,61,
21,21,21,61,21,21,21,21,21,61,21,21,21,61,21,21,21,21,21,61,21,3799

The sendir part is pretty self-explanatory, followed by 1:3, which tells the iTach to send the command to the third IR port on the device. I guess some of their devices can have multiple addresses, explaining the first 1. If all goes according to plan, it should then respond with:

completeir,1:3,1
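In Python, the whole exchange boils down to a few lines. This is an illustrative sketch, not production code; the IR timing data after the request ID is whatever you captured when teaching the iTach the command:

```python
import socket

ITACH_PORT = 4998  # the iTach's TCP command port

def build_sendir(connector, ir_data, module=1, request_id=1):
    """Format a sendir command for connector <module>:<connector>.

    ir_data is the captured frequency/repeat/offset/pulse string,
    e.g. "37993,1,1,319,160,21,61,...". Commands end with a carriage
    return.
    """
    return "sendir,%d:%d,%d,%s\r" % (module, connector, request_id, ir_data)

def send_ir(host, command):
    """Open a TCP socket to the iTach, send the command, return the reply.

    A successful send comes back as 'completeir,<module>:<connector>,<id>'.
    """
    with socket.create_connection((host, ITACH_PORT), timeout=5) as sock:
        sock.sendall(command.encode("ascii"))
        return sock.recv(1024).decode("ascii").strip()

# e.g. send_ir("192.168.1.50", build_sendir(3, "37993,1,1,319,160,21,61"))
```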

I mentioned the system turns the air conditioners on depending on the inside and outside temperature. For inside, I used the DS18B20 temperature sensor from Adafruit and followed their awesome tutorial for setting it up. For the outside temperature, I check a weather station in Brooklyn (it’s apparently closest to my place) via Weather Underground. Their API is free to use and super simple if you’re just using it for a personal project and not hitting it very hard.
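Reading the DS18B20 is just parsing a file the kernel exposes over the 1-Wire filesystem interface. A rough Python sketch, following the w1_slave format the Adafruit tutorial walks through:

```python
import glob

def read_ds18b20(device_file):
    """Read a DS18B20 via the 1-Wire sysfs interface.

    The kernel exposes each sensor as
    /sys/bus/w1/devices/28-*/w1_slave; the first line ends with YES
    when the CRC check passed, and the second ends with
    't=<millidegrees C>'.
    """
    with open(device_file) as f:
        return parse_w1_slave(f.read())

def parse_w1_slave(text):
    """Return degrees Celsius, or None if the CRC check failed."""
    lines = text.strip().split("\n")
    if not lines[0].endswith("YES"):
        return None  # bad read; retry
    _, _, raw = lines[1].partition("t=")
    return int(raw) / 1000.0

# Typical usage: read the first sensor the kernel found.
# sensor = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
# print(read_ds18b20(sensor))
```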

Harmony Hub API

The crowning achievement came this weekend when I figured out how to query and address my Harmony Ultimate Hub. Logitech doesn’t make an API available to developers, and in some ways, I don’t blame them - Harmony remotes are pretty complex and there’s a lot of state and other stuff involved. But that wasn’t going to stop me.

If you’re not familiar with the Ultimate, it’s pretty sweet, not only does the remote control everything, there’s a little ‘hub’ that sits in your living room and allows you to use your phone or tablet as an additional remote anytime you’re on your network.

Sadly, searching for Harmony Hub API at the time revealed nothing useful. I tried up, down and ten ways to Sunday to see if anyone else was trying to do what I did - I couldn’t imagine I was the only one! But nothing. So I did what any developer would do: first I port scanned it (it’s got open ports on 5222 and 8088). I tried my damnedest on port 8088; it responded to HTTP POST requests only, but I could never get a useful response. Then I hooked up a packet sniffer and tried to see what was going on with the app. Nothing. Nada.

Ah, right: the wireless network I have set up was preventing my Mac from seeing packets sent from my phone to the hub. Grrr! Okay, share the Mac’s network and try again. This time it can’t find the hub. Right, different subnet. Long story short: connect to the hub, switch networks, and now I can see a few packets. Great - let’s do a few searches to see if anyone has posted about:

vnd.logitech.harmony/vnd.logitech.harmony.engine

Again, BINGO! A GitHub repository called pyharmony, complete with a great protocol guide and working code. The API uses XMPP, which makes sense when you figure the hub potentially needs to update multiple devices with its current state in near real time. While I would have much preferred a REST API, I figured I could work with XMPP.

I grabbed the code, installed the prerequisites, then ran it. Cue sad trombone sound. It didn’t work - well, it connected, but then hung while trying to get the session token. I went back and forth with the other developer a few times, compared outputs, and then realized we were dealing with different hubs.

So this weekend, back to Wireshark I went, this time capturing everything. Sure enough, the login credentials are subtly different; once I updated the code and used the correct credentials, it worked like a charm. Sadly, my credentials don’t work on his hub, so we still need to figure out how to do proper device detection and use the appropriate credentials.

You can grab my forked code for pyharmony at https://github.com/petele/pyharmony/, which includes the credentials for the Harmony Ultimate Hub. If you’re using one of the older Hubs, grab Jeff’s code. I’ve also added a few additional functions to my fork that aren’t in the original, including getCurrentActivity and startactivity.

In my home automation system, the Away macro and the Bed Time macro both check the current activity and, if the system is on, turn everything off by calling startactivity with the activity ID -1. The getConfig API returns a JSON object with all of the info about your system, including the activity IDs for everything you’ve programmed into your Harmony. Obviously I could add a bunch more functionality to this, but that’s for another day.
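The macro logic itself is tiny. Here’s an illustrative Python sketch of the Away check, with the pyharmony calls passed in as plain callables - the names mirror getCurrentActivity and startactivity from my fork, but the wiring is made up:

```python
POWER_OFF = -1  # the Harmony activity ID meaning 'everything off'

def away_macro(get_current_activity, start_activity):
    """Power everything off if any activity is running.

    get_current_activity() returns the current activity ID
    (POWER_OFF when everything is already off); start_activity(id)
    switches to that activity. Returns True if it had to shut
    anything down.
    """
    if get_current_activity() != POWER_OFF:
        start_activity(POWER_OFF)
        return True   # something was on; we shut it down
    return False      # already off, nothing to do
```

The Bed Time macro works the same way; only the lights and air conditioner steps around it differ.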

Energy Efficient

Talking to some of my co-workers about the system, they were a little concerned about the power consumption of leaving the Pi on all day, every day, and whether that would eliminate the benefit of the Hue lights. As I was wrapping things up this weekend, I pulled out my trusty Kill A Watt and did some quick measurements. The Pi uses only about 2 watts, 3 if it’s really pushing it. The lights are only about 8.5 watts each, and since the air conditioners only come on when needed, I’m being smarter about power there too! So overall, I’m pretty sure this will save me a little money in the long run.

Home Automation for Everyone

There’s a whole bunch more I want to do with my system - for example, a morning alarm connected to my calendar to make sure I get up in time for any meetings. It could turn the lights on and tune the radio to my favorite station, gently turning the sound up until I get out of bed; the possibilities are endless.

And with the price of hardware like the Raspberry Pi, door switches and such, anyone with a little geek know-how can put together a pretty awesome system. Feel free to grab my code, rip it apart, and do your own thing with it. It’s not exactly pretty, but I love the fact that my apartment welcomes me home at night and says goodbye when I leave.

  • #pi
  • #home automation
  • #hue
  • #itach
  • #raspberry pi
  • #google voice api
  • #harmony hub api
  • #logitech api
  • #harmony api
  • #harmony ultimate api
  • #gvoice api
  • #hue api


High DPI: Tips and Tricks

May 22, 2013

During the presentation that John Mellor and I gave at I/O this year on building beautiful websites for high DPI displays, we summarized our talk into about seven key points.

If you follow these seven simple guidelines, you’ll find your site looks great on any high DPI display.

  • Setting width=device-width means you only have to care about device independent pixels
  • If you don’t set the viewport to width=device-width, or if you use a fixed width, you’re in a world of hurt.
  • The devicePixelRatio on high DPI devices can range from 1.3 to 3, and it’s about more than just phones and tablets - there are laptops too!
  • Use vector images wherever possible
  • Use @media queries to specify appropriate background images
  • Highly compressed 2x images work well in many cases
  • For sharp canvas images, beware of webkitBackingStorePixelRatio

You can find the video on YouTube at http://youtu.be/alG-UwRWV_U, and we’ve also posted the slides at http://goo.gl/j5Z5W.

  • #Conferences
  • #Web Design
  • #BestPractices
  • #mobile
  • #viewport
  • #io13
  • #highdpi
  • #retina