It’d be pretty hard right now to show someone an Internet of Things if you see the Internet as a vast interconnected system of flowing information. I used to get my grad students excited with stuff like “We’re going to put HEARTS on the Internet!” — as in, the heart muscle via some embedded connected device. Heart data, in all of its various forms, individually and aggregated across populations, could have almost infinite utility if it could be plucked out of the air on demand. But alas, hearts will never quite be “on the Internet” — and by that I mean they won’t be interconnected, findable, addressable entities, and you won’t be talking to them.

What you can increasingly find are devices attached via some network lifeline to an “IoT platform” (there are many; take your pick). These platforms, while providing the necessary systems that do the gruntwork of provisioning, billing, hardware management, and so on, have increasingly become the gatekeepers, and they are far more interested in getting data in than letting data out.

So, the “hearts” of the Internet are really Fitbits, which are prisoners of Fitbit Inc. inside Fitbit.com. Bajillions of Fitbits… floating around inside Lake Fitbit. And the EKG device is over in another lake, and the running shoe device is in another lake, and they’ve all got mountains of protocols, formats, and security between them. It’s really a shame — especially since all of the really cool applications live in those in-between spaces.

The savvy IoT developer of April 2016 can’t let this Internet of Lakes become a bunch of swamps. In 2015 it may have looked like the IFTTTs of the world (again, there are many; take your pick) could be an answer to this problem, but we’re learning now that these services are just neatly decorated swamps themselves (see “My Heroic and Lazy Stand Against IFTTT”). Developers today, whether they are managing data-generating devices and/or applications that pull data from many different places, need to be equipped to provide the “Internet of Things experience” for their users directly. I mean, every application of value has to just work. If you don’t have the connections a user is asking for today, you’d better have them by tomorrow, and you can’t depend on someone else to build them for you.
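To make that concrete, here’s a minimal sketch (in TypeScript) of what that do-it-yourself integration usually looks like: one hand-written adapter per lake, all mapped into a single shape your application can actually use. The endpoints, field names, and auth handling below are hypothetical stand-ins, not any vendor’s real API.

```typescript
// Two separate "lakes" (say, a wearable vendor and an EKG vendor), each with
// its own protocol and format, normalized into one shape by brute force.
// Hypothetical endpoints and fields; real integrations also need OAuth,
// rate limiting, and retries.

interface HeartSample {
  source: string;    // which lake the reading came from
  bpm: number;       // normalized beats per minute
  timestamp: string; // ISO 8601
}

async function fromWearableLake(userToken: string): Promise<HeartSample[]> {
  const res = await fetch("https://api.wearable-lake.example/v1/heartrate", {
    headers: { Authorization: `Bearer ${userToken}` },
  });
  const body = await res.json();
  return body.readings.map((r: any) => ({
    source: "wearable-lake",
    bpm: r.value,
    timestamp: r.time,
  }));
}

async function fromEkgLake(apiKey: string): Promise<HeartSample[]> {
  const res = await fetch(`https://ekg-lake.example/export?key=${apiKey}`);
  const body = await res.json();
  return body.sessions.flatMap((s: any) =>
    s.samples.map((p: any) => ({
      source: "ekg-lake",
      bpm: p.heart_rate,
      timestamp: p.recorded_at,
    }))
  );
}

// The application-facing call: merge whatever lakes this user has connected.
export async function allHeartSamples(userToken: string, apiKey: string) {
  const [wearable, ekg] = await Promise.all([
    fromWearableLake(userToken),
    fromEkgLake(apiKey),
  ]);
  return [...wearable, ...ekg].sort((a, b) =>
    a.timestamp.localeCompare(b.timestamp)
  );
}
```

Every new connection a user asks for is another adapter like these — tedious, but entirely within your control, which is the point.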

I like to think the future holds a very elegant solution to this problem, but TODAY, a lot of the time, the answer is brute force hackery. And brute force hackery can be viably married to an agile backend architecture by embracing the containerized, decentralized, swarms-of-microservices model that is all the rage right now. That means no servers to run, no devops to babysit, and leveraging services where scale, deployment, and configuration aren’t roadblocks. At Scriptr we’ve taken a huge step in this direction with our sort of “instant JavaScript API” pattern, but we’re also looking ahead toward even more streamlined methodologies.
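As a rough illustration of that “one small script becomes an API” idea — and only an illustration, not Scriptr’s actual programming model — here’s what a single-file bridge microservice can look like, using nothing but Node’s built-in http module. The route, port, and payload shape are arbitrary choices for the example.

```typescript
// One small script that exposes one JSON endpoint. On a containerized or
// serverless platform, the same handler would be deployed without any server
// or devops work on our side; listening locally here is just for illustration.

import { createServer } from "node:http";

// The "in-between" logic: take a reading in one lake's format and hand it
// back in the shape the next consumer expects.
function toCanonical(reading: { value: number; time: string }) {
  return { bpm: reading.value, timestamp: reading.time, unit: "bpm" };
}

const server = createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/heartrate") {
    let raw = "";
    for await (const chunk of req) raw += chunk;
    const readings: { value: number; time: string }[] = JSON.parse(raw);
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(readings.map(toCanonical)));
    return;
  }
  res.writeHead(404).end();
});

server.listen(8080, () => console.log("bridge microservice on :8080"));
```

Because the whole service is one stateless handler, a platform can scale, deploy, and configure it without that work landing back on you — which is exactly what the swarm-of-microservices model is buying us.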

So, let’s recognize that Big Data “Swamps” are already here in abundance. The bigger wins and innovations moving forward are going to be in the spaces between them.