As environmental data management professionals, we typically serve projects at one of three scales:
1) Multi-facility Enterprise projects;
2) Large-scale, single-facility projects such as Superfund (Super Fun?) sites; and
3) Small-scale projects such as Phase II investigations or gas station-sized projects requiring regular monitoring of a few wells.
Most data management systems help us do a great job serving the Enterprise and large-scale projects. But I’ve noticed that, in general, it’s tough to serve the small projects efficiently. I often feel that the overhead of setting up a database and generating data summary tables costs more labor than the project justifies. Hold it, sit down… of course I’m generalizing, and there are exceptions.
For example, I worked in an office where our whole data management system evolved to serve several large Superfund and military base projects, where we efficiently generated electronic COCs, tracked samples, loaded EDDs, and qualified hundreds of samples per event. All the field techs, project managers, and chemists, as well as the 7 or 8 labs we regularly worked with, were dialed in to our “system.” A well-established data workflow defined our office culture and, consequently, made it easy to start up projects of any size. Once a project was set up, we could quickly generate well-formatted, report-ready data tables (like the one below) and maps, making data management appear effortless. Hmmm… am I dreaming?
Conversely, many offices without a data management culture are made up of work-team silos with widely varying regulatory or client needs. Unfortunately, their small projects are much harder to support efficiently. These teams (rugged individualists, maybe?) likely use very specific Excel data summary tables and are more efficient taking a hands-on approach to managing their data (their hands, I mean, not a data manager’s). They literally add data by hand to existing Excel templates or workbooks. Yeah, they have to QC every value they hand-enter, but you know they’re going to do that to every database report you propose to generate anyway!
I believe these folks get better value with their approach than if they had to wait around for EDDs to get fixed and loaded, then get tables made (with the correct screening levels), then grind through several iterations to get the tables the way they or the client want them. Using the DIY approach, they gain hands-on control of their work. They don’t have to go through a data specialist who is unfamiliar with their specific needs and who is not always immediately available when data needs or revisions arise at the 11th hour. In other words, as hard as it is for us data managers to swallow, small teams managing their data by hand really can give their projects better quality and less “friction.”
Of course, I wouldn’t have written this post if I didn’t think there was a better way.
So, how can we help these small teams reduce labor and increase quality? The key is a simple set of tools that lets small project teams manage their data themselves, without specialized training. I’m imagining a system where small projects can reap the benefits of querying and reporting well-formatted tables and charts without the inherent complexity of a full-on data management system.
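To give just a taste of what I mean, here’s a minimal sketch in Python with pandas. Everything in it is hypothetical: the workbook name, sheet and column names, and the screening levels are stand-ins for whatever a given team already uses. The idea is simply that a small team could point a tool like this at the Excel workbook they already maintain and get a screened, report-style summary table back.

```python
# Minimal sketch: read monitoring results from the team's existing Excel
# workbook, flag exceedances against screening levels, and write a
# report-style summary table. All names and values here are hypothetical.
import pandas as pd

SCREENING_LEVELS = {  # hypothetical screening levels, ug/L
    "Benzene": 5.0,
    "Toluene": 1000.0,
    "MTBE": 20.0,
}

def load_results(workbook: str) -> pd.DataFrame:
    # Expects one row per result: Well, SampleDate, Analyte, Result
    df = pd.read_excel(workbook, sheet_name="Results")
    df["ScreeningLevel"] = df["Analyte"].map(SCREENING_LEVELS)
    df["Exceeds"] = df["Result"] > df["ScreeningLevel"]
    return df

if __name__ == "__main__":
    results = load_results("site_monitoring.xlsx")

    # Familiar report layout: wells and dates down the side, analytes across
    summary = results.pivot_table(index=["Well", "SampleDate"],
                                  columns="Analyte", values="Result")
    summary.to_excel("data_summary.xlsx")

    # Quick console check of screening-level exceedances
    print(results.loc[results["Exceeds"], ["Well", "Analyte", "Result"]])
```

The point isn’t this particular script; it’s that the whole “database” is the workbook the team already trusts, and the tooling stays small enough that nobody needs a data specialist, or specialized training, to run it.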
What would a fuller set of those tools look like? I’ll pitch some ideas in my next post.






