Hacking Amazon Alexa with Java

For the recent AT&T IoT Hackathon in Dallas, we decided to try something new and make an Amazon Echo Dot a central part of our project. Our project used a Raspberry Pi with a camera to detect when the lever on a coffee airpot is pushed down, and capture a picture. We then fed the picture through IBM Watson for facial recognition, and wrote the name and the image to an S3 bucket.
This is where Alexa took over. I wrote an AWS Lambda function in Java which read the S3 bucket and exposed two intents. The first was to ask “who took the last cup?” The function would respond with the name, which came from a text file in the S3 bucket. The second intent was more fun. You could then tell Alexa to “shame them”. This posted a Tweet with the image of the person and a caption saying they took the last cup of coffee.

We actually got this all working in a day. I handled the Alexa side of the project, while my teammate handled the Pi and Watson. The biggest challenge was figuring out how to actually get Lambda and Alexa playing together nicely using Java.

Amazon produces a lot of documentation about Alexa, and about Lambda, but very little of it covers using the two together with Java. Most of the examples are for NodeJS, and the third-party tutorials out there are of mixed quality. In the interest of improving the situation for us Java developers, I’ll share my lessons learned and walk through how to get this set up.

For the TL;DR crowd, you can grab the project source off my GitHub project and be sure to look at the examples in the Alexa Skills Kit Java SDK.

Creating your Project

First off, ignore all the “Using Lambda with the Eclipse SDK” tutorials. You do not want to do this, as you’ll just be wasting your time. You need to be using the Java Alexa Skills Kit SDK. The jar is available in Maven Central, and all the source is in the GitHub repository. More importantly, the repository includes numerous examples showing how to use the SDK. For working with Alexa and Java, reading the source is the only reliable option.

Ultimately, Alexa cares about JSON payloads. The Skills Kit SDK is essentially a bunch of wrapper classes around the JSON exchange between Lambda and Alexa. This is the reason the other tutorials you’ll find don’t work with Alexa. You can’t have a Lambda that simply takes a String and returns a String. You need to implement a Speechlet, which takes a SpeechletRequestEnvelope and returns a SpeechletResponse.
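To make that concrete, here is roughly the shape of the JSON Alexa sends for an intent request. The field values here are illustrative placeholders, and I’ve trimmed the payload to the essentials:

```json
{
  "version": "1.0",
  "session": {
    "sessionId": "amzn1.echo-api.session.example",
    "application": { "applicationId": "amzn1.ask.skill.example" },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.example",
    "intent": {
      "name": "CoffeeStatusIntent",
      "slots": {}
    }
  }
}
```

The SDK deserializes this into the SpeechletRequestEnvelope your Speechlet receives, which is why the plain String-in/String-out Lambda examples fall over.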

For the initial project structure, I used Gradle. Since I’m talking to S3 and Twitter, I also have dependencies for those. You can trim them out if you’re not using them for your own project.

group 'org.sporcic'
version '1.0'

apply plugin: 'java'

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile 'com.amazon.alexa:alexa-skills-kit:1.2'
    compile 'com.amazonaws:aws-lambda-java-core:1.1.0'
    compile 'com.amazonaws:aws-lambda-java-events:1.3.0'
    compile 'com.amazonaws:aws-lambda-java-log4j:1.0.0'

    compile 'com.amazonaws:aws-java-sdk-s3:1.11.56'
    compile 'org.twitter4j:twitter4j-core:4.0.5'

    compile 'log4j:log4j:1.2.17'
    compile 'org.slf4j:slf4j-api:1.7.0'
    compile 'org.slf4j:slf4j-log4j12:1.7.0'
}

task buildZip(type: Zip) {
    baseName = 'coffeeStatus'
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip

sourceCompatibility : make sure you set this to 1.8, as Amazon Lambda uses Java 8.
The alexa-skills-kit and aws-lambda-* dependencies : these are the core Amazon Lambda and Alexa SDK libraries. You need them.
The aws-java-sdk-s3 and twitter4j-core dependencies : I need these since I’m talking to S3 and Twitter. Remove them if you aren’t.
The log4j and slf4j dependencies : the logging libraries you’ll need for S3.
The buildZip task : to deploy Java to Amazon Lambda, the code has to be packaged as a zip file, with all the dependencies inside a directory called lib inside the zip file. This Gradle task takes care of that for you, and hooks itself onto the normal build task.

This is all the Gradle file you need to write a function for Amazon Lambda. You can add additional dependencies depending on what you’re trying to do. You will upload this zip via the Amazon Lambda management console.
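If the build works, the resulting zip should look something like this. The artifact name comes from the baseName in the buildZip task, and the exact jar list depends on your dependencies:

```
coffeeStatus-1.0.zip
├── org/sporcic/...          (your compiled classes)
├── log4j.properties         (anything from src/main/resources)
└── lib/
    ├── alexa-skills-kit-1.2.jar
    ├── aws-lambda-java-core-1.1.0.jar
    └── ...                  (the rest of the runtime dependencies)
```

The key point is that your classes sit at the root of the zip while every dependency jar lives under lib/ — that’s the layout Lambda expects for Java.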

Now you need to create your SpeechletRequestStreamHandler implementation. This is a pretty simple class:

package org.sporcic;

import java.util.HashSet;
import java.util.Set;
import com.amazon.speech.speechlet.lambda.SpeechletRequestStreamHandler;

public class CoffeeStatusSpeechletRequestStreamHandler extends SpeechletRequestStreamHandler {

    private static final Set<String> supportedApplicationIds = new HashSet<String>();

    static {
        String appId = System.getenv("APP_ID");
        supportedApplicationIds.add(appId);
    }

    public CoffeeStatusSpeechletRequestStreamHandler() {
        super(new CoffeeStatusSpeechlet(), supportedApplicationIds);
    }
}

The class name : name the class what you want, but you’ll use the fully qualified name of this class as the handler name in the Lambda configuration.
The static block : the Skills SDK has logic to verify the application ID of the caller to the Lambda function. Rather than hard-coding the application ID of the Alexa Skill in code, I read it from an environment variable configured in the Lambda Management console.
The constructor : you need to implement a no-arg constructor which calls super() with an instance of your Speechlet and the Set of your authorized application IDs.

One final piece of setup is to create a log4j.properties file in the src/main/resources of your project. This is necessary to use logging inside of your Lambda function. The file needs to contain this configuration:

log4j.rootLogger = DEBUG, LAMBDA

# Define the LAMBDA appender
log4j.appender.LAMBDA = com.amazonaws.services.lambda.runtime.log4j.LambdaAppender
log4j.appender.LAMBDA.layout = org.apache.log4j.PatternLayout
log4j.appender.LAMBDA.layout.conversionPattern = %d{yyyy-MM-dd HH:mm:ss} <%X{AWSRequestId}> %-5p %c{1}:%L - %m%n

NOTE: Be sure to change the level of the rootLogger before you go to production!

Now comes the fun part: implementing your Speechlet. Like a Servlet, the Speechlet interface defines the lifecycle methods for handling requests from Alexa. I based my code on the HelloWorld Speechlet in the Skills SDK. The primary difference is that I used the newer SpeechletV2 interface.

The SpeechletV2 interface defines four lifecycle methods Alexa will use to interact with your Lambda function:

public interface SpeechletV2 {

    void onSessionStarted(SpeechletRequestEnvelope<SessionStartedRequest> requestEnvelope);

    SpeechletResponse onLaunch(SpeechletRequestEnvelope<LaunchRequest> requestEnvelope);

    SpeechletResponse onIntent(SpeechletRequestEnvelope<IntentRequest> requestEnvelope);

    void onSessionEnded(SpeechletRequestEnvelope<SessionEndedRequest> requestEnvelope);
}

The primary method you’ll interact with is the onIntent() method. Here’s my implementation for a Skill with two intents:

    public SpeechletResponse onIntent(SpeechletRequestEnvelope<IntentRequest> requestEnvelope) {
        log.info("onIntent requestId={}, sessionId={}",
                requestEnvelope.getRequest().getRequestId(),
                requestEnvelope.getSession().getSessionId());

        Intent intent = requestEnvelope.getRequest().getIntent();
        String intentName = (intent != null) ? intent.getName() : null;

        if ("CoffeeStatusIntent".equals(intentName)) {
            return getCoffeeStatusResponse();
        } else if ("ShameUserIntent".equals(intentName)) {
            return tweetTheShame();
        } else {
            return getUnknownCommandResponse();
        }
    }

The log statement : shows that logging is handled the same as in just about every other application, along with how to get the request and session IDs.
Getting the intent : you get the Intent off the request, and call getName() to decide what you’re going to do. These are the same intent names defined in the interaction model in the Alexa Skill Kit configuration.
The if/else chain : I evaluate the String value of the intent name and call a separate function for each intent. I also have a fall-through function which returns a generic unknown-command response.
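The if/else chain is fine for two intents, but if your skill grows, a map of intent names to handlers keeps onIntent tidy. This is my own refactoring sketch with stubbed string responses, not the SDK types or code from the project:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class IntentDispatch {

    // Map intent names (as defined in the interaction model) to handlers.
    // In a real Speechlet these suppliers would return SpeechletResponse objects.
    private static final Map<String, Supplier<String>> HANDLERS = new HashMap<>();

    static {
        HANDLERS.put("CoffeeStatusIntent", () -> "Bob took the last cup");
        HANDLERS.put("ShameUserIntent", () -> "Shame tweeted");
    }

    public static String dispatch(String intentName) {
        // Unknown (or null) intent names fall through to a generic response
        return HANDLERS.getOrDefault(intentName, () -> "Unknown command").get();
    }

    public static void main(String[] args) {
        System.out.println(dispatch("CoffeeStatusIntent")); // prints "Bob took the last cup"
        System.out.println(dispatch("SomeOtherIntent"));    // prints "Unknown command"
    }
}
```

With this shape, adding a third intent is one line in the static block rather than another else-if branch.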

Now let’s walk through one of the functions that builds the SpeechletResponse:

private SpeechletResponse getWelcomeResponse() {
        String speechText = "Welcome to Coffee Status";

        SimpleCard card = new SimpleCard();
        card.setTitle("Coffee Pot");
        card.setContent(speechText);

        PlainTextOutputSpeech speech = new PlainTextOutputSpeech();
        speech.setText(speechText);

        return SpeechletResponse.newTellResponse(speech, card);
}

The card : while the Echo is a voice device, Alexa also has a mobile application. The cards (SimpleCard and StandardCard) define what shows up in the Alexa application as a result of the voice interaction. A SimpleCard only displays text, while a StandardCard provides the ability to include an Image.
The speech : this is where we define what gets said back to the user via Alexa.
The response : now that we have the Card and the OutputSpeech, we use a static factory method on SpeechletResponse to build the response. The response can either be a “Tell” response, which simply states the OutputSpeech text, or an “Ask” response, which says the OutputSpeech and then prompts the user to provide additional information, continuing the user’s session.

The Intent also provides access to the Slots data defined in the Alexa Skill interaction model. The History Buff sample in the Alexa Skills SDK is an excellent demonstration of how to get data from the slots and carry on an interaction with the user.

Once all the code is ready, do a standard ./gradlew build to generate the zip file for upload to the Lambda Management console. The zip is placed in the build/distributions directory of your Java project.

One final note: the SDK lays down a pattern for adding the configuration of your Intents and Sample Utterances to the code repository. The pattern is to create a speechAssets folder under the directory your Speechlet is in. The two files you’ll create are IntentSchema.json and SampleUtterances.txt. Here are examples of mine:

IntentSchema.json:

{
  "intents": [
    { "intent": "CoffeeStatusIntent" },
    { "intent": "ShameUserIntent" }
  ]
}

SampleUtterances.txt:

CoffeeStatusIntent who took the last cup of coffee
CoffeeStatusIntent who took the last cup
CoffeeStatusIntent who was the last person to get coffee
CoffeeStatusIntent what jerk took the last cup
CoffeeStatusIntent what jerk took the last cup of coffee
ShameUserIntent to shame them
ShameUserIntent shame them

Having these in your source code makes them easier to edit, since you can just copy/paste them into the correct fields in the Alexa Skill configuration. Having them close at hand also helps as a reference while developing your intents.

This takes care of the code. In my next post, I’ll cover how to deploy this to Amazon Lambda, and how to configure and test the Alexa skill.

A Resurgent Nikon

As a photographer, I’ve been a longtime Nikon user. To me, Nikon always epitomized the real photographer’s camera. From my beautifully crafted FM3a to the modern DSLRs, there has always been something magical about handling a Nikon. But over the past few years, Nikon has fallen into a rut. The short story is they stopped innovating.

Nikon behaved as if they were the only game in town. In a way, they were. Canon hasn’t done anything exciting since the 5D Mark III, which is really long in the tooth. Nikon started to treat their own camera lines as their only competition, intentionally crippling different camera segments to ensure they didn’t cannibalize each other. Want a pro-caliber body in DX format? No way. Even an FX sensor in a pro body without going to 36MP was not an option. And even within a range, improvements in new releases were minor. For example, the D600 -> D610 -> D750 series offered only minor, evolutionary enhancements, contrary to what their customers were demanding. Nikon became more interested in protecting model segmentation than doing the right thing for their customers.

It had nearly reached the point where I was prepared to divest of my Nikon gear. Over the past two years, there has been a lot of innovation happening in photography, but unfortunately for Nikon, it was happening with mirrorless cameras and not Nikon DSLRs. I’ve been really impressed with the Fuji XT-1 and the high quality lenses Fuji has been releasing. I feared Nikon was about to pull a Kodak and let themselves fail due to their own arrogance.

Sony has been out-innovating Nikon at every turn with their new A7 cameras. They have high-quality third-party lens manufacturers making lenses exclusively for their cameras. In short, Sony was not just moving Nikon’s cheese, they were stealing it outright.

Sony was also churning quickly, responding to the demands of photographers. They only waited a year between the A7R and A7RII, and the changes between them were huge. Sony was also engaging more with photographers. They even contacted me to ask if they could use one of my tweets for their social media marketing campaign.

Fortunately, rather than dying the Kodak death, Nikon appears to have recognized the challenges they face rather than sticking their head in the sand. Along with releasing their new flagship DSLR, the D5, they also dropped a camera no one was expecting: the D500.

The D500 is the camera a lot of Nikon users had been asking for: a true professional body APS-C (DX) sensor camera. I had used the D200 back before moving to full frame and loved it, but I felt back then APS-C was being left for dead. I don’t think Nikon would have made this camera without the competition from Sony. It is the first revolutionary camera they have released in years, and it shows that Nikon is back in the game.

I’m predicting Nikon will release a D900 later this year, which takes the same pro body of the D500 and upsizes it to hold the same full-frame sensor as the D5. This would be the no-compromise successor to the D700 that photographers have been screaming for, not another pro-sumer lightweight like the D600/D610/D750 chain. I love my D750, but would upgrade in a heartbeat to the D900.

I’m hoping Nikon has finally realized that protecting the segmentation between their camera lines is less important than giving photographers what they want and fending off the huge threat from Sony. It remains to be seen, but at last I see a glimmer of hope. And I’ll be hanging onto my Nikon gear.

Surface Pro 4 Impressions


I’m a huge fan of the iPad. My iPad Air is my most used device, outside of my work laptop. I appreciate the tablet form factor in general, and have really only been disappointed by the iPad’s inability to handle some of the more general computing tasks I do.

I started getting interested in the Microsoft Surface Pro 3 earlier this year. It looked like an ideal compromise, but I refused to buy one because of the abomination that was Windows 8. By the time Windows 10 dropped, there were already rumblings of the Surface Pro 4, so I waited a bit longer. Finally, in the past couple weeks, we had two huge announcements: the iPad Pro and the Surface Pro 4.

Both looked like exciting devices. Both have screens about the size of a piece of letter-sized paper, which would be great for reading. But the Surface Pro 4 was especially interesting because it brings Windows 10 to the form factor. I use my Windows 10 desktop most of the time at home over my aging MacBook Pro. It’s definitely a bit rough around the edges compared to OS X, but Microsoft is really churning on it and I’ve been very happy using it.

The Microsoft Store

While hitting the mall with my daughter this weekend, I dropped into the Microsoft Store and was surprised to see they had both the Surface Pro 4 tablet and a Surface Book to try. I stuck my daughter in front of an Xbox One with Forza, and proceeded to poke away at both of them for a good 15 minutes. Since I couldn’t install anything on them, I used Google’s Octane 2 benchmark to get a comparison point for performance. Conveniently, the mall also has an Apple store, so I ran the same benchmark on a current MacBook Air and 13″ Retina MacBook Pro.

Surface Pro 4 Impressions

  1. The Surface Pro 4 is slightly heavier than the iPad Pro, but it is still a very manageable weight. I can see carrying it around easily or sitting on the sofa with it on my lap.
  2. Construction quality appears really good. The screen is bright. The kickstand is slick. And having a USB 3.0 and Mini Displayport connector makes hooking up to peripherals and an external monitor easy, which is something the iPad Pro can’t do.
  3. Precision of the pen is really good. It flows smoothly, and can do fine lines, including a signature, easily. It’s comfortable in the hand, and the “eraser” end feels natural. The only downside is it only attaches magnetically to the Surface Pro 4 on one side of the tablet, and in one direction. And you can’t just randomly attach it to the side. The pen needs to be turned to the correct side for the magnet to attach.
  4. Palm detection while using the stylus was a bit slow. Any time I put down my palm, it scrolled a little bit before it figured out what I wanted to do. Since this is software, I’m sure it is something Microsoft will churn on and get right.
  5. The keyboard smart cover is nice. I actually thought it was a bit too stiff, but the trackpad works well. I’m sure I could get used to the keys, though, and be productive with it. It attaches firmly and easily.
  6. The power connection isn’t up to the quality of Apple’s MagSafe connector. It doesn’t attach as firmly, and the cord comes off the connector to the side, not straight, so you have to put it in one way.
  7. Windows 10 felt snappy, and responded easily to touch.
  8. The Surface Pro 4 doesn’t have a cellular data option. I have this on the iPad, and would miss it on the Surface Pro, but it’s not a deal killer. It is a pretty big oversight on Microsoft’s part, though; always-on, always-ready internet is the expectation now.

Surface Book impressions

I played with the Surface Book too, but I actually preferred the Surface Pro 4. In tablet mode, it felt too large. The keyboard and trackpad felt really good, but it didn’t feel any more special than my MacBook Pro keyboard. In convertible mode (screen turned around, attached to base), it is too heavy for normal use. The mechanism for detaching the screen for tablet mode is slick, and worked well, but I suspect it’s going to be a point of failure.

The Surface Book has potential, but I found it too laptop-like. The Surface Pro 4 is an excellent tablet that can become a laptop replacement, while the Surface Book is a laptop that can become a tablet for a little bit.


Here’s the fun part — the Octane 2 scores:

System                                             Score
Surface Pro 4 (i5, 4GB), MS Edge                   23,994
Surface Book (i5, 8GB), MS Edge                    28,979
13″ MacBook Air (1.6GHz i5, 4GB), Safari           21,036
13″ Retina MacBook Pro (2.7GHz i5, 8GB), Safari    25,520
iPad Air (A7), Safari                               7,153

The Surface Pro 4 sits closer to the Retina MacBook Pro than the MacBook Air. I suspect that, once loaded up with the i7, it will be a direct competitor to Apple’s MacBook Pro rather than the MacBook Air. I also added the iPad to try to infer what the iPad Pro will do. Even if the A9X is 2.5x faster than the A7, that still leaves it well behind even the base Surface Pro 4 with an i5, at essentially the same price point. The iPad Pro is going to be a tough sell going head to head with the Surface Pro 4.

The Surface Book is definitely a MacBook Pro killer. It’s going to put a lot of pressure on Apple for their next iteration of laptops.


The Surface Pro 4 is a big win, to the point I’ll be ordering one. It has all the media consumption capabilities of a large iPad, while also allowing me to click open the kickstand, slap a keyboard on, and do real work, all on the same device. I could see it becoming my primary computing device. With the pen, this is going to be an awesome tablet for editing photos. I can run real Photoshop, with my favorite plugins, and do high quality edits easily.

Microsoft has really knocked it out of the park with the Surface series, and the Surface Pro 4 in particular. As a Java developer, I’ve been pretty critical of Microsoft in the past. But Satya Nadella has really turned the company around. Microsoft is a technology company again, not just a bunch of sales people trying to suck every dollar possible out of Windows and Office sales to enterprises.

And great job to the Surface team for building such a truly stunning device. I look forward to seeing what else comes from you all in the future.

California with the Fuji XT-1

I spent the past week taking advantage of Spring Break to take a trip out to California with the family. We hadn’t been back to the Bay Area and Monterey for about two years, and it is one of our favorite places to visit.

For this trip, I decided to pack light and only take my Fuji XT-1 with the 23mm, 56mm and 14mm lenses. I ended up using the Fujinon XF 23mm f/1.4 lens most of the time, but I did do some street photography with the Fujinon 14mm f/2.8 lens in San Francisco.

While I love the 56mm lens for portraits, the 23mm has proven to be my favorite general purpose lens. It handles itself well in most any circumstance, from portraits to travel shots. Here are some of the shots with this trio.

Statues at Cannery Row in Monterey
Chinatown in San Francisco
Fishermans Wharf in San Francisco
The Golden Gate Bridge
Drug Store Pirate along Cannery Row in Monterey
Fishermans Wharf in San Francisco

Fuji Fujinon XF 16-55mm f/2.8 R LM WR Lens Review

The new “pro” zoom from Fuji is finally starting to hit the streets. My favorite local camera store, Competitive Cameras, actually got two in last weekend. The first guy on the list drove halfway across the state of Texas to get his. I was number two.

I became a fan of Fuji when they released the XT-1 last year. It had the feel of the manual cameras I was used to from the days of film, but with the size and feature set of a modern digital camera. I also really loved the size and build quality of their prime lenses. The 23mm and 56mm are incredible lenses. Combined with the XT-1, they make for a potent street shooting and travel combination.

Which brings us to their new big brother, the Fujinon XF 16-55mm f/2.8 zoom lens. This definitely marks Fuji’s attempt at producing a pro-grade lens and the size reflects it. While I’m not selling any of my primes any time soon, I also know there is a time and place for a solid, fast short zoom.

My first reaction on mounting it on my XT-1 was “damn, this is a big lens”. It is actually pretty similar in size to the Nikon 24-120mm lens, which I found to be a boat anchor after carrying it around all day on a Nikon D600.

I carry my XT-1 cross-body on a DSPTCH Standard Sling. The 16-55mm definitely sticks out further than the primes, but I didn’t find it too unbalancing, so I decided to take the chance and bring the lens home.

Here’s the 16-55mm compared to the 23mm and 56mm. Yep, it is that big, with a wide 77mm filter size to boot.


And here it is hanging off the front of the XT-1. The vertical grip would probably improve the balance, but I don’t find it too bad on the XT-1. I definitely don’t see this as a lens to use on any of the other smaller Fuji bodies.


The first weekend, I shot a kid’s birthday party to get used to the lens. The weather was crappy, so it was lots of high-ISO indoor shooting or flash shots with a Nissin i40. We finally had some nice weather this weekend, so I trucked the 16-55mm and family off to one of our favorite local parks for some shooting.

I’m usually a RAW shooter, but to stay consistent, all the images for the review are jpegs straight from the camera, shot in Standard profile with +1 color. Clicking an image will bring up the full sized files for pixel peepers.

First up is the obligatory wall shot to see how the lens handles in the corners. Since you’re buying this lens to shoot at f/2.8, I’ll only post those versions. I also took a set at f/8 that are extremely sharp across the frame. All shots are at ISO 200.

16mm at f/2.8
16mm at f/2.8
23mm at f/2.8
23mm at f/2.8
35mm at f/2.8
35mm at f/2.8
55mm at f/2.8
55mm at f/2.8

Center sharpness at all the focal lengths is incredible. Wide open at 16mm we get a bit of curvature and the corners are a bit soft. At 23mm and 35mm, I see just a touch of softness at the outer edges, and 55mm is essentially perfect from edge to edge. I don’t see any vignetting at any of the focal lengths, but I’ll have to try against a white background.

Overall, this is an insanely sharp, high performance lens that holds its own against the f/2.8 offerings from the big dogs. Here are a few more shots from out at the park. The small versions are a bit distorted, so click to see a visually correct version.






After carrying this lens around for an afternoon, I would call it a keeper. Like any f/2.8 zoom, you’re really buying it to use at that aperture, and it doesn’t disappoint.

The focus on this lens is extremely fast and quiet. The aperture ring has very solid detents, more so than any of my primes, so there’s no accidentally knocking the aperture loose. The zoom ring is just stiff enough: easy to turn, but the lens won’t start to creep out while hanging around your neck. This was a major complaint of mine with the Nikon 24-120mm. I also like the bokeh I got from this lens.

On the downside, this lens pretty much kills the typical stealth mode of the XT-1 with a prime. This bad boy sticks out from your body. And I’m not a fan of the lens hood: due to how I carry the camera, I kept catching my hand on its irregular edge while walking, before I finally found a comfortable angle to carry it at.

So should you buy it? You really can’t go wrong with this lens, assuming you can stomach the size and price. If you’re comfortable with the primes and going lean and mean, this isn’t your lens. I like the versatility this fast, sharp zoom brings to the game, and it will definitely be a travel companion going forward. In fact, this lens, along with its big brother the Fujinon XF 50-140mm f/2.8, is probably all some people will need.

(Note: this review was originally posted on another site I was experimenting with. I didn’t like the experiment, so I moved the review over here)