I was recently introduced to Spark Core – a diminutive Arduino clone with integrated WiFi and extensive cloud support.
These are some observations I made while porting some Arduino 433 MHz RF code to it.
If one for some reason lacks WiFi (the integrated on-chip antenna otherwise offers quite good coverage), user code will in general never execute. The core will try forever to connect to the Spark Cloud while blinking a rainbow of confusing colors. The remedy is either to set up WiFi immediately (and give up on the thought of using the core in an unconnected, non-Internet-of-Things setting) or to use a recent addition to the Spark API:
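```cpp
SYSTEM_MODE(SEMI_AUTOMATIC);
```

The `SYSTEM_MODE` macro is part of the Spark firmware API and goes at the top of the application file, outside any function.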
The semi part of the system mode makes the core refrain from attempting to connect to the cloud until explicitly told to (via `Spark.connect()`).
Spark Core ships with “special” bootstrap firmware, named tinker, that is factory-installed on all cores. If one, like me, skips the “newbie”-only step-by-step guide and immediately flashes over this firmware with something of one’s own creation, it’s no longer possible to get the core into the “Listening mode” required to set it up with credentials for the local WiFi network. However, it’s easy to reflash with tinker.
The core’s single multi-color status LED is a train wreck. It’s in many cases the only key to what the core is really up to and, at the same time, painfully hard to decipher. I mean – what’s the difference between pulsing and “breathing”? Can you differentiate between light blue and cyan in a well-lit room?
When porting Arduino code I stumbled on some interesting incompatibilities:
- The Arduino binary literal syntax, e.g. B01010101, is unsupported. It’s recommended to use gcc syntax, e.g. 0b01010101, instead.
- The Spark Core has a 32-bit CPU while much Arduino code assumes 16-bit integers; this can cause problems with code that does bitwise operations (a perennial C language favourite).
- When the Core is connected to the cloud one can expect some latency between iterations of the main loop function (that’s where the WiFi system code is run). This can cause problems with timing-sensitive Arduino code.
- Debugging by means of logging through the serial interface can be tricky – serial communication appears to be seriously asynchronous even when data is explicitly flushed.
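The integer-width pitfall above can be sketched in Python, simulating the two C integer widths with explicit masks (the 16-bit vs 32-bit results in the comments reflect typical Arduino vs Spark Core behaviour, not anything Spark-specific):

```python
# Simulate C unsigned integer arithmetic at a given bit width.
def invert(value, bits):
    """Bitwise NOT, truncated to an unsigned integer of the given width."""
    return ~value & ((1 << bits) - 1)

mask = 0b01010101  # gcc-style binary literal (B01010101 in Arduino syntax)

print(hex(invert(mask, 16)))  # 16-bit Arduino result: 0xffaa
print(hex(invert(mask, 32)))  # 32-bit Spark Core result: 0xffffffaa

# A shift that overflows 16 bits silently drops the high bits on a
# 16-bit Arduino but keeps them on the Core:
print(hex((1 << 20) & 0xFFFF))      # 0x0
print(hex((1 << 20) & 0xFFFFFFFF))  # 0x100000
```

Code that masks hashes or packs RF pulse trains into integers this way ports over silently wrong, so it pays to audit every `~`, `<<` and overflow-dependent expression.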
Spark Cores are quite cool and small. 🙂
OAuth is a simple standard for allowing an end user to authorize an application to access a third party service on behalf of said user.
Access is authorized on two levels by the third party, as both the application and the user on whose behalf it is acting need to be identified.
Through an out-of-band channel – typically a web form at the third party service where the application developer applies for access – the application obtains the following pair of credentials:
- Consumer key (a.k.a. API key, public key, application key). Transmitted to the third party as oauth_consumer_key.
- Consumer secret (a.k.a. API secret, private key, consumer secret key, application secret).
The consumer secret is never directly transmitted to the third party; instead it is used to calculate a signature for requests.
The user typically authorizes the application to access the service using a 3-legged OAuth process, upon whose completion the application obtains an access token consisting of:
- Token (a.k.a. access token). Transmitted to the third party as oauth_token.
- Secret (a.k.a. access token secret, oauth token secret).
The secret is never directly transmitted to the third party; instead it is used to calculate a signature for requests.
OAuth-authorized HTTP requests to the third party carry several OAuth-specific parameters, the most important of which is oauth_signature. Its value is (typically) an HMAC-SHA1 signature keyed on the consumer secret and the access token secret, computed over the parameters sent in the request. The OAuth parameters can be sent as standard URL parameters or as the value of the Authorization HTTP header.
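As a minimal sketch of the signing step in Python – simplified from RFC 5849, so it ignores the nonce, timestamp and duplicate-parameter subtleties a real client must handle – the signature can be computed like this:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign(method, url, params, consumer_secret, token_secret):
    """Simplified OAuth 1.0 HMAC-SHA1 signature (cf. RFC 5849)."""
    enc = lambda s: quote(s, safe="")
    # 1. Normalize the request parameters: sort and join as key=value pairs.
    normalized = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # 2. Build the signature base string from method, URL and parameters.
    base = "&".join([method.upper(), enc(url), enc(normalized)])
    # 3. The key concatenates the two secrets -- neither is sent directly.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The resulting base64 string goes into the oauth_signature parameter; the server repeats the same computation with its copies of the secrets and compares.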
I’ve long been looking around for some sort of Internet-connected home host for purposes such as experimenting with web application technologies and backing up source code. After much consideration I finally decided on Mosso – which promised affordable, fully featured virtual machines in their cloud.
I’ve never looked back since. The spin-up of the machine was a breeze, the installation of my preferred Linux distribution quick, and for only about $10/month I had root access to my very own private Ubuntu server, with its own IP address. Mosso provides a handy management console for such tasks as taking backups of the instance, fiddling with the DNS configuration and migrating between their different plans. Initially I went for the dirt cheap basic plan, which comes with 256 MB of RAM, 10 GB of HDD and a guaranteed 1/64 part of the CPU power of a 2-way quad core server. Memory demands stemming from Spotifun – a Grails-based web application I’ve currently deployed to the instance – forced me to go one step higher on the plan ladder: 512 MB of RAM, 20 GB of HDD and a 1/32 part CPU guarantee for ~$20/month. The migration was trivial: my virtual hard drive was simply grown, and after an OS restart I was set.
Competitors such as Amazon EC2 offer more assuring CPU-sharing guarantees but appear much more expensive than Mosso, an important consideration for my mostly hobbyist endeavors. Mosso’s paltry CPU guarantees have in practice never troubled me; CPU-bound workloads typically fly. However, I cannot rule out the possibility that I was simply lucky and got deployed to a lightly loaded physical server – the backdrop to this posting is only my personal experience with a single instance.
The only negative thing I want to add is that Mosso hosts its servers in the USA, causing an unavoidable round trip time of up to 200 ms for Europeans such as yours truly. The lag when hacking in an SSH console is acceptable for me – I’m not a fast typist anyway. 🙂 I’m not aware of any other cloud hosting service which currently allows the customer to decide the continent where the instance is to be hosted. However, the rumour mill has it that Microsoft Azure will have this feature when they are out of beta.
Since a couple of months back, Mosso is the Rackspace Cloud.