We are long past the point where we need to formalize a dedicated data tier that decouples a building's automation or management system from the downstream applications that rely on its data.
It’s a risk-free first step on the journey to a smart building: unlock and model the data that’s currently locked away in proprietary and siloed systems.
It creates a single source of truth by enforcing one data model (e.g. Project Haystack or Brick Schema) for all applications and promotes interoperability.
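To make that concrete, here is a minimal sketch of what a semantically modeled point could look like, using Project Haystack-style marker tags. The record and the `has_tags` helper are illustrative only, not a real Haystack API:

```python
# A hypothetical sensor point described with Project Haystack-style marker
# tags (tag names follow Haystack conventions; the record itself is invented
# for illustration).
point = {
    "id": "ahu1-dat",
    "dis": "AHU-1 Discharge Air Temp",
    "point": True,        # marker: this entity is a point
    "sensor": True,       # marker: it senses a value (vs. cmd or sp)
    "temp": True,         # marker: the measured quantity is temperature
    "discharge": True,    # marker: located on the discharge side
    "air": True,          # marker: the medium is air
    "unit": "°F",
    "equipRef": "ahu1",   # reference to the parent equipment entity
}

def has_tags(rec, *tags):
    """Return True if the record carries every requested marker tag."""
    return all(rec.get(t) is True for t in tags)

# Any application can now query by semantics instead of vendor naming:
assert has_tags(point, "temp", "sensor", "discharge", "air")
```

The point is that every application queries the same tags, rather than each one re-interpreting vendor-specific point names.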
It reduces dependence on one vendor and promotes a cooperative ecosystem. Depending on the building owner’s needs, it may be most beneficial to select multiple vendors to fulfill all the capabilities desired. If the data layer platform is designed as such, it could start to look like an app store for the building.
Similarly, it de-risks the investment by allowing the owner to trial, test, and compare multiple smart building applications without needing to restart the costly integration from scratch.
However, James challenges these assertions on the premise that they won't hold up in practice. If we walk through each point, though, I believe the responses actually strengthen the case for a data layer.
Let's look at each one:
De-risking strategies like this perpetuate the myth that these technologies aren’t quite ready for primetime.
This could be true for any industry. We are constantly evaluating our tools and improving them because there are unmet needs. What if we'd been satisfied with BlackBerry as the solution to mobile computing — and the iPhone was never invented? Challenging the status quo is how we drive our industry forward.
It might actually increase risk for the owner by adding complexity, increasing the timeline, delaying results...
A data layer specifically designed for BAS applications will have to be easy to deploy and cost-effective to be successful. The core competencies of this layer are data collection, storage, API integration, reliability, and security. Focusing on only these pieces implicitly allows it to do a better job than "full stack" solutions, which must balance substantially more functionality.
By simplifying data collection with better tools, and pulling legacy, disparate protocols up to a single, consistent, high-level IP interface, we actually reduce complexity for downstream commissioning. And with the increased reliability of the data layer, we also reduce long-term maintenance costs on those systems.
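As a sketch of that single, consistent interface, the following assumes hypothetical protocol adapters (the class names, addresses, and readings are invented stand-ins, not a real driver API) normalized behind one read call:

```python
# Protocol-specific adapters are hidden behind one read_point() call, so
# downstream applications never deal with fieldbus quirks directly.
class BACnetAdapter:
    def read(self, address):
        # A real adapter would issue a BACnet ReadProperty request here.
        return {"value": 72.5, "unit": "degF"}

class ModbusAdapter:
    def read(self, address):
        # A real adapter would read and scale a Modbus holding register here.
        return {"value": 22.5, "unit": "degC"}

class DataLayer:
    def __init__(self):
        self.adapters = {"bacnet": BACnetAdapter(), "modbus": ModbusAdapter()}

    def read_point(self, protocol, address):
        """One call, one response shape, for every legacy protocol."""
        reading = self.adapters[protocol].read(address)
        # Normalize units so applications see consistent data.
        if reading["unit"] == "degC":
            reading = {"value": reading["value"] * 9 / 5 + 32, "unit": "degF"}
        return reading

layer = DataLayer()
assert layer.read_point("bacnet", "dev:12/ai:3")["unit"] == "degF"
assert layer.read_point("modbus", "40001")["value"] == 72.5
```

Commissioning an application against this layer means integrating once, against one interface, instead of once per protocol.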
If you don’t understand and plan for the applications that will use the data, you’ll struggle to model it appropriately … Complex applications like FDD are not an undifferentiated commodity
It's important to separate the notion of logical structure from the physical data. While we’ve made great advances in the space of semantic tagging with standards such as Project Haystack and Brick — there is still work to be done. Modeling must exist in a layer above the actual data storage in order for the data to be future proof, portable, and reusable.
As machine learning techniques advance, I believe we'll continue to see modeling automated by software, and become an expectation, not a feature. Additionally, tagging is generally more effective further up the stack, where the use cases are more specific and the application often has more context to apply semantics accurately.
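As a toy illustration of automated tagging, a rule-based tagger over raw point names might look like the sketch below. Real systems increasingly use machine learning for this; the rules, abbreviations, and point names here are invented to make the idea concrete:

```python
import re

# Infer Haystack-style marker tags from raw BAS point names using simple
# pattern rules (illustrative only; not a production tagging engine).
RULES = [
    (r"\bDAT\b|DISCH", {"discharge", "air", "temp"}),
    (r"\bRAT\b|RETURN", {"return", "air", "temp"}),
    (r"\bSP\b|SETPT", {"sp"}),          # a setpoint rather than a sensor
    (r"TEMP", {"temp"}),
]

def infer_tags(point_name):
    """Return the set of semantic tags inferred from a raw point name."""
    name = point_name.upper()
    tags = set()
    for pattern, inferred in RULES:
        if re.search(pattern, name):
            tags |= inferred
    if "sp" not in tags:
        tags.add("sensor")              # default: it senses, not commands
    return tags

assert {"discharge", "air", "temp", "sensor"} <= infer_tags("AHU1.DAT")
assert "sp" in infer_tags("ZN-TEMP-SETPT")
```

Even this crude version shows why automation is plausible: point naming, while inconsistent across vendors, is usually consistent within a site.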
Just because your data is in a full-stack software, doesn’t mean it’s not open and usable.
This is true, and in some cases it's the best solution. But saying your data is exportable is a bit misleading. For one, data export is unlikely to be a core competency of any full-stack solution, so how simple it is in practice varies greatly from product to product. Could you build an entirely new application off the export feature of an existing product? Just because it's possible doesn't mean it's practical.
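To illustrate the gap between "exportable" and "usable", here is a sketch of the cleanup a hypothetical vendor CSV export would need before a new application could consume it. The export format, point names, and field names are invented for illustration:

```python
import csv
import io

# A made-up trend-log export, typical of the shape such files take:
raw_export = """Point Name,Timestamp,Value,Units
AHU1/DA-T,2024-01-01 00:00,68.2,deg F
AHU1/DA-T,2024-01-01 00:15,68.9,deg F
"""

def load_export(text):
    """Parse the export into records a downstream app could actually use."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        rows.append({
            "point": row["Point Name"].replace("/", "."),  # re-map naming
            "ts": row["Timestamp"],                         # still a string!
            "value": float(row["Value"]),                   # coerce the type
        })
    return rows

rows = load_export(raw_export)
assert rows[0]["point"] == "AHU1.DA-T"
assert rows[1]["value"] == 68.9
```

And this is the easy case: a clean file, one point, one format. Multiply it by every vendor's export quirks and the "just export it" argument starts to look expensive.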
Another major concern is who's responsible for managing that data. If you're using a self-hosted solution, it's your problem: keeping it connected, backed up, and secure. In the age of cloud services, this isn't something BAS companies should be spending time and money on. Who manages their own email servers anymore?
In conclusion — consider the possibilities for buildings if any application can be installed and immediately has a vast pool of data to operate from. This could be a new flagship product you want to evaluate. Or it could be a Jupyter Notebook you’re using for exploration. The flexibility of easy open data access unlocks enormous potential for the future of smart buildings.