Shortly after Maximilian was born, I decided that he should get a mobile to place over his crib. I knew what theme I wanted: a space mobile. I looked all over, but couldn’t find any in stores, and I sure as hell wasn’t going to spend 90 bucks on one from Etsy, so I decided to make one myself. Normally, these things are made out of felt, but not having a sewing machine, I decided to make Maximilian’s out of paper.
Two related links, both involving using your phone to shoulder surf your passwords. Both attacks take advantage of the fact that smartphones with accurate accelerometers are now ubiquitous. By monitoring the vibrations of the phone, the attacks infer what keys were pressed on a keyboard. Both of these are more proofs of concept than sophisticated attacks, but they are interesting nonetheless.
At HotSec ’11, Liang Cai and Hao Chen of UC Davis were able to infer which key was pressed on an onscreen keyboard with 70% accuracy. By measuring how far the phone was torqued around both the X and Y axes, the location where force was applied, and thus which key was pressed, can be inferred. Cai and Chen made the task a bit easier for themselves. They held the phone in landscape mode, which spread the keys out more, causing a larger distribution of torques that could be measured. That’s not necessarily a problem, since many people type in landscape mode. The bigger simplification was that they only looked at touches on the dialing pad. A more interesting paper would have looked at attacking the alphabetical keyboard instead. I understand why they didn’t: the experiment was to find out whether someone could use the accelerometers to read key presses at a high enough accuracy. Looking at their confusion matrix, I would think that determining alphabetical keyboard presses would need a two-step solution. First, you’d get a distribution of what key was pressed. You’d then combine these distributions with a Markov chain language model to determine what the actual key press was. “it was the durst of timez” becomes a bit more Dickensian, a little less crappy rap-rock, and a lot less monkey.
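The two-step idea above can be sketched as Viterbi decoding: treat the accelerometer reading as a noisy observation of the true key, and the bigram language model as the Markov chain over keys. Everything below is hypothetical for illustration: the two-key alphabet, the emission (confusion) probabilities, and the bigram probabilities are all made up, not taken from the paper.

```python
from math import log

KEYS = "ab"  # toy two-letter alphabet to keep the example tiny

# P(observed reading | true key): the accelerometer "confusion" distribution.
emission = {
    "a": {"a": 0.7, "b": 0.3},
    "b": {"a": 0.4, "b": 0.6},
}

# P(next key | previous key): a toy bigram (Markov chain) language model.
transition = {
    "a": {"a": 0.1, "b": 0.9},
    "b": {"a": 0.8, "b": 0.2},
}

def decode(observations):
    """Return the most likely true key sequence given noisy observations."""
    # Each entry maps key -> (log probability, best path ending in that key).
    states = {k: (log(1.0 / len(KEYS)) + log(emission[k][observations[0]]), k)
              for k in KEYS}
    for obs in observations[1:]:
        nxt = {}
        for k in KEYS:
            # Extend each previous path by key k, keep the most probable one.
            nxt[k] = max(
                (lp + log(transition[prev][k]) + log(emission[k][obs]), path + k)
                for prev, (lp, path) in states.items()
            )
        states = nxt
    return max(states.values())[1]
```

With these made-up numbers, a raw sensor reading of "aaa" decodes to "aba": the language model says a double "aa" is unlikely, so it overrides the middle observation, which is exactly the kind of correction that would turn "durst" back into "worst".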
Of course, sniffing the phone’s keyboard is one thing, figuring out what someone is typing on their laptop or desktop is something else, but that’s exactly what
Philip Marquardt and others at Georgia Tech did. In their work published at CCS 2011, they describe a technique where a phone placed next to a keyboard reads key presses via vibrations on the table at 80% accuracy. Unlike the method above, this team used a dictionary to increase the decoding accuracy. Their method feels the vibrations through the table and then attempts to categorize each key press as being on the left or right side of the keyboard (assuming the phone is placed to the left of the keyboard). Pairs of key presses are read, and the distance between the first and second key of each pair is categorized as either “near” or “far”. These triples are then passed through the dictionary to determine the most likely English word typed. Left-right and near-far categorization is done using a neural net.
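The dictionary step can be sketched roughly like this: encode each word as a sequence of left/right side labels plus near/far distances between consecutive presses, then keep only the dictionary words whose encoding matches what the classifier reported. The column layout and the “far” threshold below are my own illustrative assumptions, not values from the paper.

```python
# Approximate column index of each letter on a QWERTY keyboard,
# used both for the left/right split and for measuring distance.
COLUMN = {c: i for i, c in enumerate("qwertyuiop")}
COLUMN.update({c: i for i, c in enumerate("asdfghjkl")})
COLUMN.update({c: i + 1 for i, c in enumerate("zxcvbnm")})

def pattern(word, far_threshold=3):
    """Encode a word as L/R side labels plus N/F distances between pairs."""
    sides = ["L" if COLUMN[c] <= 4 else "R" for c in word]
    dists = ["F" if abs(COLUMN[a] - COLUMN[b]) >= far_threshold else "N"
             for a, b in zip(word, word[1:])]
    return "".join(sides), "".join(dists)

def candidates(observed_pattern, dictionary):
    """Return the dictionary words consistent with the observed pattern."""
    return [w for w in dictionary if pattern(w) == observed_pattern]
```

In practice the classifier’s L/R and N/F labels are themselves noisy, so a real decoder would rank candidates by how many label flips they require rather than demanding an exact match.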
via Security News Daily.
Wow, just last week I singled out the Microsoft Kin as an interesting idea. Yesterday, the Kin died.
Wow. I really nailed that one. Apparently, the reviews for it were rather poor. In all fairness, I wasn’t thinking much about the phone, just the UI. The visualization of the lifestream was what was interesting, and there’s no reason why this idea can’t be applied to some other product.
Update: Wed Jul 7 15:13:28 PDT 2010
Microsoft sold 503. Ouch. Well, I didn’t buy one.
Update: Fri Jul 9 01:28:50 PDT 2010
Thoughts from the guy who killed the Kin.
Schuresko once mentioned using shared artifacts for collaboration and social network interaction. Instead of simply clicking buttons in lists, users would manipulate representations of the activities/messages, more like how one drags icons around on a desktop. He mentioned OLPC’s Sugar interface, and how other OLPC users show up as icons on the home screen, complete with icons indicating what activity they are currently engaged in. Since many of the OLPC applications are collaborative, clicking on a user’s icon will create a shared session with him/her. Also, when users are collaborating, their icons appear huddled around the same activity icon.
I hadn’t seen anything like that interface before, especially deployed outside of a lab. I thought about it recently, when simply sharing a folder on my computer with Ming’s turned into an exercise in frustration. (Either we couldn’t log in to each other’s machines, or the network link would collapse immediately after beginning the transfer.)
Later, I saw an ad for Microsoft’s Kin phone. Its interface (shown above; you have to click around on their link-unfriendly site to find the video yourself) seems pretty novel. The user is initially presented with a graphical life stream. From this, they can drag items down to the area immediately below the stream (the “Kin Spot”) to share them with people in their address book. Again, destinations are selected from a graphical stream and dragged to the spot. Tapping the spot lets the user edit the final message and send it.
It would be interesting to create an interface like this for Diaspora, if that ever gets off the ground.