Reporting Cross-sector/Interagency Data

Ask eAIR invites questions from AIR members about the work of institutional research, careers in the field, and other broad topics that resonate with a large cross-section of readers. If you are interested in writing an eAIR article, or have an interesting topic, please contact eAIR@airweb.org.  

This month’s question is answered by Jason Pontius, Associate Chief Academic Officer, Iowa Board of Regents.

The ideas, opinions, and perspectives expressed are those of the authors, and not necessarily of AIR. Subscribers are invited to join the discussion by commenting at the end of the article.

Dear Jason: Most states have built or are working on some type of statewide longitudinal data system (SLDS). In your experience in Iowa, what are some of the benefits and challenges in reporting on cross-sector/interagency data?

The ability to use cross-sector data to follow large numbers of students throughout their educational paths and beyond can be extremely powerful. For example, in Iowa we have used our SLDS to understand, for the first time, how many Iowa high school graduates go to college, how many struggle with remedial courses and retention in college, and how many eventually earn certificates and degrees. We have explored factors that predict future success in college and conducted research on how well the state of Iowa prepares high school students for college-level math.

Coming up with benefits is the easy part. When we began our SLDS grant in Iowa, we had little trouble thinking of interesting (and hopefully useful) things we could do with good longitudinal data. In retrospect, I think our excitement about the sheer potential of the data created some of our biggest challenges.

As institutional research professionals and data analysts, we were in a hurry to combine data so we could use the system to test hypotheses and uncover new patterns in student behavior. At one of our first steering committee meetings, we discussed some of these amazing possibilities; instead of sharing our excitement, the data stewards on the committee became concerned. If I recall correctly, there was frequent use of the phrase “putting the cart before the horse.” Agencies were concerned because they had yet to agree on any shared data elements; they were worried that their data could be misused, misconstrued, or misunderstood; and they had a real concern that, as responsible stewards of student data, they would not only lose control of who accessed their data but also lose the ability to provide appropriate context for those data. In short, we needed trust first.

Trust takes time

I believe that building trust was the single biggest challenge we had in Iowa. Our SLDS project brought together multiple agencies and asked them to share data voluntarily. Agencies had agreed in principle because they saw the potential of an SLDS, but the details were all-important. In the end, it took about a year of countless meetings and lowered expectations before we started to share real data. How did we get there? To quote Bill Murray in “What About Bob?”, we took a series of “baby steps.” Here is a sample of some of those steps:

Minimal central data sharing

Data stewards were unwilling to participate in a centralized SLDS. There was strong resistance to the idea of “throwing everything into a pot and hoping for the best.” Instead, we settled on a federated model with “just enough” data sharing. This meant that cross-agency data would never be stored centrally unless it was for a specific deliverable (e.g., Postsecondary Readiness Reports), and that only the data necessary to generate that deliverable would be shared. That meant abandoning some early “big sky” data exploration ideas, but it kept the project focused and more manageable.
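To make the “just enough” idea concrete, here is a minimal sketch of deliverable-scoped extraction. The deliverable spec and element names are hypothetical (they are not the actual Iowa SLDS schema); the point is simply that each agency ships only the columns a given deliverable needs.

```python
# Minimal sketch of deliverable-scoped extraction under a federated model.
# The spec and element names below are hypothetical, not the Iowa SLDS schema.
import pandas as pd

# Each deliverable declares exactly which elements it needs from each agency.
DELIVERABLE_SPECS = {
    "postsecondary_readiness_report": {
        "k12": ["student_key", "grad_year", "district_id"],
        "postsecondary": ["student_key", "first_term", "remedial_math_flag"],
    },
}

def extract_for_deliverable(agency_df: pd.DataFrame, agency: str, deliverable: str) -> pd.DataFrame:
    """Return only the columns this deliverable needs from this agency -- nothing more."""
    needed = DELIVERABLE_SPECS[deliverable][agency]
    missing = set(needed) - set(agency_df.columns)
    if missing:
        raise ValueError(f"Extract is missing agreed-upon elements: {sorted(missing)}")
    return agency_df[needed].copy()
```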

Start small and build slowly

The initial data dictionary included minimal K-12 data combined with largely directory-level data from the colleges and universities. Directory-level data were low-risk because they were essentially public data and already available through the National Student Clearinghouse (NSC). Before we shared real data, we used dummy data to test our data transmission and storage systems. When we did share real data, we started with a limited number of data elements for a small sample of school districts. This process often seemed painfully slow, but while we tested our systems, validated data, and built prototype reports, we also built trust with our participating agencies.
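As a rough illustration of the dummy-data step, the sketch below fabricates a small extract that matches an agreed file layout so that transmission and storage can be exercised before any real student records move. The field names and dictionary format are made up for illustration, not taken from the Iowa SLDS data dictionary.

```python
# Minimal sketch: generate a dummy extract that matches a simple data dictionary,
# so transmission and storage can be tested before real student data are shared.
# Field names and the dictionary format are illustrative only.
import csv
import random
import string

DATA_DICTIONARY = [
    {"name": "student_key", "type": "id"},
    {"name": "district_id", "type": "id"},
    {"name": "grad_year", "type": "year"},
]

def fake_value(field_type: str) -> str:
    if field_type == "id":
        return "".join(random.choices(string.digits, k=8))
    if field_type == "year":
        return str(random.randint(2008, 2016))
    return ""

def write_dummy_extract(path: str, n_rows: int = 100) -> None:
    """Write a CSV of fabricated records that follows the agreed layout."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([field["name"] for field in DATA_DICTIONARY])
        for _ in range(n_rows):
            writer.writerow([fake_value(field["type"]) for field in DATA_DICTIONARY])

write_dummy_extract("dummy_k12_extract.csv")
```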

Veto power among data providers

Any Iowa SLDS agency contributing data for a specific deliverable can pull its data at any time. Since the Iowa SLDS started in 2012, no agency has exercised its veto power. I believe, however, that agencies feel more secure sharing data knowing they can walk away from the SLDS. This veto provision helps ensure that we make decisions by consensus and maintain high levels of communication across agencies.

Constant communication

Our SLDS project team has met weekly since 2012, even though our federal grant ran out in 2016. I believe our weekly meetings have been perhaps our single best mechanism for building and maintaining trust.

Document everything

Documentation is wonderfully dull and time-consuming, but it is important to building trust and maintaining systems. If data stewards have questions about how the SLDS uses their data, they have access to the full business logic and data extraction syntax. All SLDS documentation is stored in a shared folder that anyone can access at any time. This documentation has also become a valuable part of institutional memory as members of the SLDS team have retired or left for other jobs.

In summary, there’s a rule of thumb that when you analyze data, you must first spend 80-90% of your time cleaning the data. In my experience with cross-sector SLDS data sharing, you must first spend 80-90% of your time building trust.

 
 
