Even increased efficiency can create new problems. With more efficiency comes the thought to collect more data.

–Data Analyst Brian Degon

Swimming in data

AI moves fast, and that speed amplifies the challenges of data security. One example I saw on the way to posting this interview: in a joint study by Boston-area schools of the recent surge in OpenClaw – an autonomous agent intended to take over a machine and work on behalf of the machine’s owner – the variety of security trouble the agent got into surprised even the researchers, who were anticipating some problems.

Which brings us back to fundamentals – and to my talk with Masters Academy International (Stow, Mass.) data analyst and Bryant University adjunct professor Brian Degon. Brian previously spent twenty-three years as a data and process analyst at WPI in Worcester, Mass.

What Brian has had to address will be recognizable to any manager looking for clarity about handling the risks of stored data and getting buy-in for tech initiatives. He argues that tool and process adoption starts with understanding how work actually gets done, and that data collection initiatives should begin with asking the right questions.

The complete discussion, including more about Brian’s experiences at WPI, is here.


What can you tell me about inside operations – specifically about employee adoption of new internal tools and processes?

My focus the past twenty-five years has been to understand internal users: what are their processes? their needs and wants? … My approach isn’t showing people a system and saying ‘figure out how you’re going to use it.’ It’s talking to users to really understand what the business challenges are and what processes and practices they have in place, while looking at where tasks are repetitive or redundant to try and reduce waste.

Unless you understand from the end users what kind of problems they’re dealing with on a regular basis, you’re not going to know which data to give them or how to filter it or highlight it to meet their needs. And right now, with all these AI tools … without a strategy behind adoption, [you won’t know] will this eventually be saving us time or will this just be something else that could be problematic? With more efficiency comes the thought to collect more data. And more data means more risk.

How do you decide what data to collect and hold onto?

It comes from understanding what you’re going to collect, why you’re going to collect it, and how you’re going to use it … Things like GDPR and Mass Privacy have put some consumer advocacy in place, so that organizations can’t just collect the data and then have free rein over it forever. The frameworks to be in compliance are helpful to identify where an organization’s risks are and where things can be improved.

It’s critically important to identify and label your sensitive data. Organizations can define what data they consider to be sensitive, and put access controls on it. They can encrypt data at rest, so if someone does get physical access to a storage location or device, there would be no use in taking that data – they wouldn’t be able to read it or use it.
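The labeling idea above can be made concrete with field-level classification: tag each field with a sensitivity level, then derive which fields need stronger controls (such as encryption at rest) from the labels rather than deciding ad hoc. This is a minimal sketch; the field names, labels, and threshold are hypothetical, not from the interview.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Hypothetical labels for fields in a student-records table.
FIELD_LABELS = {
    "student_id": Sensitivity.CONFIDENTIAL,
    "ssn": Sensitivity.CONFIDENTIAL,
    "email": Sensitivity.INTERNAL,
    "campus": Sensitivity.PUBLIC,
}

def fields_requiring_encryption(labels, threshold=Sensitivity.CONFIDENTIAL):
    """Return the field names whose label meets or exceeds the threshold."""
    return sorted(f for f, s in labels.items() if s.value >= threshold.value)
```

Once the labels exist, the same map can drive other controls – access reviews, masking in reports, retention rules – so the definition of “sensitive” lives in one place.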

When you’re done using [data], what’s the value of keeping it versus the risk of it being compromised? A policy without a practice is just a piece of paper – and the more clearly you define your retention policies, the easier you can put them into practice.

[At WPI] we had a general framework for assessing the value of data: where is the data? why is it there? do we still need it? … A lot of it was data that had originally been retained for business continuity and disaster recovery. You don’t need data from twenty years ago to continue operations. And you certainly don’t need backed-up copies of it.
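The “where is it, why is it there, do we still need it” framework becomes enforceable once a retention window is defined: anything past the window is a candidate for deletion rather than another backup cycle. A minimal sketch, assuming a single hypothetical seven-year window (real policies vary by record type and regulation):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; actual windows depend on record type and law.
RETENTION = timedelta(days=7 * 365)

def past_retention(records, now=None):
    """Return records whose last-used timestamp is older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["last_used"] > RETENTION]
```

Running a check like this on a schedule is one way to turn a retention policy from “a piece of paper” into a practice.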

If you don’t have a need to collect that data … then you’re putting yourself at unnecessary risk.

How would an organization know if its data decisions are working?

You want to make sure that you’ve decided how you’re going to measure success … I’ve done some work with predictive AI tools. Not generative AI, but more what I would call analytical AI. These tools are very effective at helping humans understand the relationship between inputs and outputs … If the analysis shows that any one of these [variables] doesn’t have an impact, you might as well stop collecting that. If your collected data doesn’t serve the purpose you intended, you’re best off, from a security standpoint, to stop collecting it and get rid of it.
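One simple version of the input/output analysis Brian describes is a correlation check: if a collected variable shows essentially no relationship to the outcome you care about, it is a candidate to stop collecting. This is a sketch, not Brian’s actual tooling; the feature names and the 0.1 cutoff are hypothetical, and real analyses would use more robust measures than Pearson correlation.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def low_impact(features, target, threshold=0.1):
    """Flag features whose absolute correlation with the target is below threshold."""
    return [name for name, xs in features.items()
            if abs(pearson(xs, target)) < threshold]
```

A feature flagged here maps directly to Brian’s advice: if it doesn’t serve the purpose you intended, stop collecting it and get rid of what you hold.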


Bryley’s staff is here to help guide you in maintaining uptime and getting buy-in as you deploy or explore integrating AI in your workflow. To speak with Bryley’s Roy Pacitto, please complete the form below, email Roy at RPacitto@Bryley.com, or reach him by phone at 978.562.6077 x217.

Connect with a Bryley IT expert about AI integration