I saw the article and came away thinking “meh”. Where’s the news here? About every ten years or so someone invents a new name for the same old thing. What was once called “ubiquitous computing” became “ambient computing” became “Internet of Things”. I see nothing wrong with a move away from using “Smart Home.” As hinted at in the article, if you only stick to commercial offerings, it’s hard to call anything you can do with this stuff “smart”.
Apple less so, but all three are very active participants in Open Source. But they have the same motivation that anyone has to contribute to a FOSS project, which is “I’ll contribute by adding something to the project that I need.” And Google in particular is the originator of many, many FOSS projects.
They may not be interested in helping us because we don’t have anything they need, and as with any company there is a strong “not invented here” attitude. But it is incorrect to say that these companies do not contribute to FOSS projects. They do so in a self-interested manner, but so do the rest of us.
Standards are a double-edged sword though. Once a standard is set, it becomes relatively fixed forever. This means that new and novel use cases may become impossible if they fall outside the standard.
And one reason why we don’t have a standard in this area is that:
- the requirements are diverse making a single standard unlikely to be sufficient
- no one has developed a compelling standard that has some sort of advantage over the others (note that an advantage need not be technical superiority, e.g. VHS versus Betamax)
- there is that thorny problem of remote access.
Also, make no mistake, there are tons of standards in this realm that are being used.
- TCP/UDP/IP
- 802.11
- Zwave
- Zigbee
- OAuth
- HTTP(S)
- TLS/SSL
- USB/Serial/etc
- UPnP
- Multicast
- MQTT
I can go on. The problem isn’t that we lack standards. We couldn’t integrate anything into openHAB if there were not standards upon which all this stuff is built. The problem is that the standards in use are often not compatible with each other, which means we need something like openHAB to bridge between them. I do not see this changing in my lifetime.
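To make the bridging point concrete, here is a toy sketch of what a hub does when it normalizes reports from incompatible protocols into one common model. The payload shapes, field names, and scaling below are invented for illustration; they are not the actual openHAB binding formats.

```python
# Toy sketch of a hub's job: two different wire formats, one unified
# item state a rules engine can work with. All payload shapes here are
# hypothetical, not real Zigbee/Z-Wave frames.

def from_zigbee(payload: dict) -> dict:
    # Hypothetical Zigbee-style report: {"state": "ON", "brightness": 128}
    # where brightness ranges 0-254.
    return {"item": "LivingRoomLamp",
            "on": payload["state"] == "ON",
            "level": round(payload["brightness"] / 254 * 100)}

def from_zwave(payload: dict) -> dict:
    # Hypothetical Z-Wave-style report: {"value": 55}, a 0-99 dim level.
    return {"item": "LivingRoomLamp",
            "on": payload["value"] > 0,
            "level": round(payload["value"] / 99 * 100)}

# Incompatible standards in, one normalized state out.
state_a = from_zigbee({"state": "ON", "brightness": 128})
state_b = from_zwave({"value": 55})
print(state_a)
print(state_b)
```

The point of the sketch is that neither protocol is “wrong”; they simply encode the same concept (a dimmer level) differently, so something has to sit in the middle and translate.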
Every decision made by a vendor is based on this sort of calculation. You can’t blame them for looking at the market and deciding that the added cost to support 5GHz (for example) wouldn’t be justified by increased sales. If the market overall doesn’t care enough, then it’s not a good business decision to pursue it. It might be inconvenient for you or for me, but we are not the majority of the market. We wouldn’t drive up the sales enough to make it worth their while.
OK, let’s pretend I’m a company. I want to stay in business. To stay in business, I have to make more money on the services and products I sell than it costs me to provide them. *There currently exists no single standard in this ecosystem for my company to adopt.* So I have three choices, two of which are really the same choice:
1. Adopt someone else’s standard. But whose? Which one has enough users that it would increase the sales of my product? Is there one? If there is, I don’t know what it is.
2. Build my own “standard” and publish it as one more among the dozens of existing standards.
3. Build my own and keep it to myself.
Given that 1 really isn’t an option (though Google, Microsoft, Mozilla, et al. are all trying to build their own “one standard to rule them all”, none of them has achieved broad adoption), we are left with 2 or 3. And from a cost perspective, 2 costs significantly more than 3.
Given that the vast majority of customers who will be buying my product don’t care at all whether I’m doing 1, 2, or 3, it won’t drive any of my sales, so what choice do I have? 3 makes the most business sense.
There are strong technical reasons why most of these companies rely on a cloud service, and no, it’s not just to gather all your data. It is the only viable way to give the average joe user the ability to access their devices remotely.
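The technical obstacle is NAT: a device sitting on a home network cannot accept inbound connections from the internet, but it can open an outbound connection to the vendor’s cloud and wait for commands relayed over it. Here is a minimal sketch of that relay pattern, with in-process queues standing in for the long-lived connections; all the names are invented for illustration.

```python
# Minimal sketch of why remote access usually means a cloud relay.
# A device behind NAT can't be reached directly, but it CAN hold an
# outbound connection open to the vendor's cloud. The queue below
# stands in for that long-lived outbound link.
import queue

cloud_to_device = queue.Queue()   # commands relayed over the device's link
device_state = {"lamp": "OFF"}    # state held inside the home network

def phone_app_sends(command: str) -> None:
    # The phone app never reaches the device directly; it hands the
    # command to the cloud, which drops it onto the device's link.
    cloud_to_device.put(command)

def device_services_link() -> None:
    # The device drains commands arriving over its outbound connection.
    while not cloud_to_device.empty():
        device_state["lamp"] = cloud_to_device.get()

phone_app_sends("ON")
device_services_link()
print(device_state)  # the lamp changed state with no port forwarding
```

The alternatives (asking the average user to configure port forwarding or a VPN) are non-starters for a mass-market product, which is why nearly every vendor lands on some version of this relay.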
And if you go back to thinking like a business that wants to stay in business, one must ask:
1. Does it make sense to provide an API at all? Will the ability for advanced users to integrate with their products drive sales enough to cover the initial and ongoing costs?
2. If it makes sense to provide the API, does it make sense to provide two APIs? We already need the cloud-based service to allow remote access. We already have to build an API for that cloud-based service so our apps can communicate with the devices. So do I just publish that API, make it usable, and call it done, or should I go through the effort of building yet another API that allows for local access?
3. What risk do I face by opening up an API, either through the cloud service or locally? What if I’m hacked? What if I do it poorly and users’ data is exposed? Providing access to the data and abilities of an IoT system does not come risk-free. All it takes is one bug or an inept configuration of a database and suddenly you become the latest threat to privacy and civilization.
If you think about it from a business perspective, it’s no wonder that when there is any API at all, it’s through a cloud service. Those of us who are home automation enthusiasts are not a large enough market to change the answers to those questions much.
And that is one of the big issues with the cloud approaches. The vendor cannot and will not guarantee the service. Just look at the Google Graveyard, or the recent news from BestBuy.
In my opinion, cloud-based smart devices are way overpriced. And I totally agree: if you have the skills and desire, you should stick to stuff with a local API. But we are not a big enough market, so we will always be underserved in this regard. As discussed above, the business side of things will drive companies towards the cloud service API.