Data bloat will continue to be a problem for the enterprise. In IT environments where metadata accumulates and is never deleted, for example, data stores grow with no end in sight. In the year ahead we will see solutions begin to emerge that attempt to identify key indicators or markers of this problem, much as doctors look for early symptoms of disease.
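One plausible shape for such an early-warning marker is a simple growth-rate check over a store's footprint. This is a minimal sketch, not any vendor's method; the function name, threshold, and daily-size input format are all illustrative assumptions.

```python
def bloat_markers(daily_sizes_gb, growth_threshold=0.02):
    """Flag days where a store grew faster than the threshold
    (fractional growth per day) -- a crude 'early symptom' of bloat.
    daily_sizes_gb: list of observed store sizes, one reading per day."""
    flagged = []
    for day in range(1, len(daily_sizes_gb)):
        prev, curr = daily_sizes_gb[day - 1], daily_sizes_gb[day]
        # Flag any day whose relative growth exceeds the threshold.
        if prev > 0 and (curr - prev) / prev > growth_threshold:
            flagged.append(day)
    return flagged

# A sudden 7% jump on day 2 stands out against normal ~1% daily growth.
print(bloat_markers([100, 101, 108, 109]))  # → [2]
```

In practice the interesting work is in choosing the threshold and the signal (raw size, metadata-to-data ratio, orphaned-object counts), but the pattern is the same: watch the trend, not the absolute number.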
The goal will be to create technologies that prevent data degradation and bloat in storage environments, and this is the year we will see formative steps toward that future. Even in these early stages, we will begin to squeeze time out of processes like information transfer, editing and review; an inordinate amount of time is wasted moving data around simply so we can look at it, review it, take it back and change it. As a result, a new mindset toward data will start to take hold.
Data management will become more demand driven in the year ahead. This will require a change in mindset: away from treating data as a monolith and toward understanding what types of data teams actually need. What are they using the most, and how can IT leaders provision the transfer of that data? Understanding the demand for data, not just how many users are supported or what capacity is needed, will be the hallmark of a new demand-driven model that will begin to shape both services and pricing.
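The core of a demand-driven model is ranking data by who actually pulls it, rather than by how much space it occupies. A minimal sketch, assuming a hypothetical access log of (team, dataset) events:

```python
from collections import Counter

def rank_by_demand(access_log):
    """Given (team, dataset) access events, return dataset names ordered
    by demand, so IT can prioritize which data to provision or cache
    first. The log format here is an assumption for illustration."""
    demand = Counter(dataset for _team, dataset in access_log)
    return [name for name, _count in demand.most_common()]

log = [("sales", "crm"), ("sales", "crm"),
       ("eng", "logs"), ("ml", "crm")]
print(rank_by_demand(log))  # → ['crm', 'logs']
```

The same counts could feed pricing as easily as provisioning, which is exactly why demand, rather than capacity, becomes the unit the model is built around.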
As data becomes more demand driven over the course of the next year, we will see machine learning incorporated into data environments. This will mark the beginning of an exciting shift toward self-healing data management and storage that in many ways will look more like IoT, with intrinsic, continual monitoring of the state of the data itself. For instance, we will see the first technologies that can observe data deployments, identify potential problems as they develop, and then fix them automatically.
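The observe-diagnose-repair loop behind such self-healing systems can be sketched in a few lines. This is purely illustrative: the replica-count signal, target value, and action strings are assumptions, and a real controller would act through storage APIs rather than return strings.

```python
def self_heal(observed_replicas, target=3):
    """Compare each dataset's observed replica count to its target and
    return the repair actions a self-healing controller would take.
    observed_replicas: dict mapping dataset name -> current replica count."""
    actions = {}
    for dataset, count in observed_replicas.items():
        if count < target:
            # Under-replicated data risks loss: add copies.
            actions[dataset] = f"replicate +{target - count}"
        elif count > target:
            # Over-replication is bloat: reclaim the excess.
            actions[dataset] = f"reclaim -{count - target}"
    return actions

print(self_heal({"a": 2, "b": 3, "c": 5}))
# → {'a': 'replicate +1', 'c': 'reclaim -2'}
```

Machine learning enters where the rules above do not suffice: learning what "normal" looks like for each deployment so that drift is caught before it crosses a hard threshold.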