Editor’s Note: On April 21, 2021, HaystackID shared an educational webcast designed to inform and update legal and data discovery professionals on how organizations can manage public comment responses for environmental impact projects, rule changes, and permit applications using the latest technology with comprehensive workflows to meet strict deadlines and detailed reporting.
While the full recorded presentation is available for on-demand viewing, provided for your convenience is a transcript of the presentation as well as a copy (PDF) of the presentation slides.
[Webcast Transcript] Optimizing Public Comment Management in Environmental Impact Projects
Managing public comment responses for environmental impact projects, rule changes, and permit applications requires the latest technology with comprehensive workflows to meet strict deadlines and detailed reporting.
In this presentation, eDiscovery experts shared how to streamline and optimize the labor-intensive process of accepting and resolving public feedback using Relativity coupled with innovative workflows.
+ Review team considerations for public comment reviews.
+ Collection and processing of public comments.
+ Using eDiscovery technology to rapidly and automatically process and categorize public comments in the form of emails, letters, public hearing transcripts, hand-written comment cards, and social media comments.
+ Organizing public comments by name, date, topic and sub-topic, location, and project-specific criteria.
+ Building detailed first draft environmental reports that track and associate each comment to the author.
+ Tara Bellion – Ms. Bellion is a certified Project Manager with AECOM with more than 26 years of experience as a National Environmental Policy Act (NEPA) impact assessment practitioner. (Tara.Bellion@AECOM.com – 907-750-6286)
+ Albert Barsocchini, JD, LLM, CEDS – As a Global Advisory Consultant with HaystackID, Mr. Barsocchini has more than 25 years of global legal and technology experience in discovery, digital investigations, and compliance.
+ Cameron Tschannen, Esq. – Mr. Tschannen serves as Director of Review at HaystackID. In this role, he is responsible for managing the review of electronic data at the direction or in consultation with clients and outside counsel’s legal teams.
+ Abigail Donohoo, JD – Ms. Donohoo serves as Review Manager for HaystackID. She plans, deploys, and manages client review projects in cooperation with the Analytics and Review team in this role.
Thank you all for joining us today. [Today HaystackID shares] on the topic of Optimizing Public Comment Management in Environmental Impact Projects.
Good morning. This is Tara Bellion. I am with AECOM. We are one of the world’s largest engineering, environmental, and construction management companies, and several years ago we had the pleasure of working with Albert and his team, developing a relationship with them, particularly around our needs regarding the National Environmental Policy Act. That’s how we came to partner with each other on projects and to present with our team today. Thank you.
My name is Cameron Tschannen, I’m a Director of Review for HaystackID. I’ve worked in document review for many, many years, including in review management for the last six-plus years. Working with me the last few years has been Abby Donohoo. [Good Morning] Abby.
Hi, my name is Abby Donohoo. I am a review manager with HaystackID. I’ve been with the company for about two and a half years, and in my capacity, I work with our recruiting team to staff reviews and oversee review workflows, and handle communications with clients about review progress and developing a common substantive understanding of various protocols.
Thank you, Abby, and with that, we’ll go into the agenda for today. So, we’re going to introduce you to the project itself, then talk about selecting the right team and technology for the project, team considerations and requirements, collecting and tracking the public comments, comment analysis and collaboration, documenting the process for not only internal quality control, but also for the NEPA process itself, organizing the public comments, building the first draft reports, and best practices.
Thanks, Cameron. So, one of our main reasons for partnering with HaystackID, and Albert [Barsocchini, Global Advisory Consultant] and Cameron and Abby, was a project we had that was an environmental impact statement, as required under the National Environmental Policy Act of 1969, which we call NEPA for short in environmental consulting. This is just a quick NEPA 101 for those of you who aren’t familiar with working with NEPA. NEPA grew out of the environmental concerns and movements of the late 1960s, and what it really did was look at how to understand changes in the ecological environment and the natural resources that were important to the nation at the time. It established the Council on Environmental Quality, which the US Environmental Protection Agency actually became the steward of. Early objectives for NEPA were that it would supplement the existing authority of federal agencies, and it was really meant to reform agency procedures to look at the consequences of the decisions being made, particularly by putting environmental concerns on an equal footing with technical, social, and economic concerns. It was also really important that it fostered intergovernmental coordination and cooperation between federal agencies, which is something that, as NEPA progressed, we really saw happen. More recently in NEPA, we saw that with joint records of decision, One Federal Decision under the previous administration. The main goal of NEPA was really to enhance public participation in government planning and decision making, and where that became key was that it brought the public into the decision-making process for programs and plans that the federal government had to make decisions on.
So, some examples of that would be an oil and gas lease sale; for federal projects, it could be a visitor center at a national park and preserve; projects that require federal funding; railroad projects; FERC permits; Department of Transportation projects; and then projects that require federal approvals and permits by an applicant. By that I mean oil and gas, telecommunication infrastructure, wind, tidal. Anything that requires a federal permit usually triggers NEPA. I live in Alaska, where we like to say that you can’t throw a rock without hitting federal property and federal lands, and ergo, you’ve triggered NEPA. Where the focus really comes in is when you have a federal action to make a decision, you trigger NEPA, and that results in either an environmental assessment being required, which usually leads to a finding of no significant impact, or an environmental impact statement. The focus of the work that I do is in environmental impact statements and comment management and public involvement, particularly complex, controversial projects that generate a really high volume of public comment.
As the internet grew, it really changed how we needed to respond to and manage public comment as well, particularly because of legal challenges we saw coming out of how the public was making comment. Because we were dealing with such a large volume of public comment, we weren’t able to identify trends where alternatives were suggested in public comment, or where there was an omission of analysis of a public concern, and that was what opened up a project that may have been permitted to legal challenge.
So, we teamed with the team from HaystackID to really bring them on board to our team to help us manage comment volume, and where we see in the NEPA process two really key stages where we get a high volume of public comment is during public scoping projects on an EIS, and when we are at a public hearing phase of the project after a draft environmental impact statement has gone out to federal agencies and the general public for review. So, with that, I’m going to end my NEPA 101 there, and we’re probably ready to move on to the next slide.
So, early in the project, particularly when we are planning a project and proposing to do some work for a federal agency, we try to right-size at that point what to expect in terms of comment volume, and that’s where selecting the right technology to manage the public comment becomes really key. And Cameron, do you want to add some here?
Yes, absolutely. Obviously, when Tara and AECOM came to us, as she mentioned, she was worried about the large volume of comments they were going to receive, and she needed a partner to help work through that volume and really determine what’s required from an expert review perspective and what technology can do to lessen the load on those experts. That’s where HaystackID, with its experience in using structured analytics, particularly textual near-duplicate identification, and in managing document reviews, was able to help. With Tara’s introduction to the requirements for NEPA, we were able to build out a process centered around a technology we’re used to using: basically, an eDiscovery tool. And with that eDiscovery tool in place, we also needed to think through the collection of the comments, how those comments were going to be reviewed by the review team, and what the technical staff could do to lessen the burden on the review team.
So, as Tara mentioned, the large volume of data really becomes a sticking point for projects like this, and that’s where the use of structured analytics comes in, in particular the identification of textual near-duplicate groups: identifying which documents or submissions are essentially the same, with no additional substantive comment, such that one document or one exemplar from each group can be reviewed and incorporated into the analysis while still satisfying the requirements of NEPA. That’s where eDiscovery tools offer particularly useful insights and capabilities to reduce the volume of data requiring review.
Another consideration for the review team and for the collection team is how that data is analyzed and interpreted. The tools that we use in eDiscovery are very good with electronic communications, in terms of analyzing the text they contain, but we also have to remember that part of this process includes handwritten comments. Those handwritten comments need to be scanned and then OCRed so that the system can identify whether they are the same comments that are coming in, meaning are they part of a form letter campaign, or are they unique, substantive comments that are going to require analysis and, ultimately, some sort of response in the NEPA process? So, it’s very important for the technical staff either (a) to have some substantive knowledge in the matter, so they can help identify where documents are going to require additional review, or (b) just to understand their limitations, so that after the textual near-duplicate analysis is complete, they can hand it off to a review team that really understands the project, and that’s where starting with a small review team and increasing as needed is important. Abby?
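Because OCR output from scanned, handwritten comments is noisy, it helps to normalize the text before any similarity comparison. The sketch below is purely illustrative (the function name and normalization rules are our own assumptions, not any specific eDiscovery tool's behavior), but it shows why two scans of the same form letter can be made to match:

```python
import re

def normalize_ocr_text(text: str) -> str:
    """Normalize noisy OCR output before similarity comparison."""
    text = text.lower()                       # case differences are not substantive
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # drop punctuation and OCR artifacts
    text = re.sub(r"\s+", " ", text)          # collapse runs of whitespace
    return text.strip()

# Two scans of the same form letter, with different OCR noise:
a = normalize_ocr_text("Please  PROTECT the wetlands!!")
b = normalize_ocr_text("please protect the wetlands.")
print(a == b)  # True: normalized copies now match exactly
```

In practice a production pipeline would do far more (spell correction, stop-word handling), but even this level of cleanup makes form-letter detection on OCRed material much more reliable.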
Yes, we started with, I believe, just two reviewers and dropped down to one, and that worked very well with the flow of documents, because initially, we weren’t getting a large volume, but as it drew closer to the deadline, the volume of comments increased, and then we were able to ramp up the team size as needed, but we always had a couple of reviewers who had a decent amount of experience on the project, and solid understanding of what the client was looking for. And I know we’re going to talk about this in more depth later, but we had regular meetings with the client all through the process to make sure that our team was on the same page and understanding how documents needed to be categorized.
And that’s where team requirements become important.
Right, so kicking off the project, we had not really worked together much previously; we’d had one small project under our belt between the two firms. For this project, the first thing we did, several years ago, when you could still sit down in person, was go out and just meet each other. We really went over what our needs were going to be, how we could work together and collaborate, and worked out how we were going to move data between the two companies. Getting to know each other and establishing protocols right off the bat for how we moved data became really key. We also did some training on the system, so that we could fine-tune how it was going to work and give feedback to each other before we started to receive the bulk of the comments, and that became really key in learning how to work with each other with regard to communication and expectations. Cameron?
And that sort of process falls in line with what document review really is. For those familiar with document review, law firms and corporations during litigation will seek out firms to help with large volumes of data that require eyes-on review. Working with Tara and the AECOM team, they were able to educate our review team to make sure that we were accurately reviewing comments, so that the NEPA process and the report that’s ultimately going to be filed responds to the substantive comments of the commenters. Turning to the review, the bulk of the comments received were form letters, form letter campaigns where the substance is largely the same, but you have to determine: is this a document that is simply a form letter, meaning all of the content is the same, or has the individual submitter added some additional substantive comment that requires review and response? That’s where it’s very important to have someone who’s educated in what is and is not a substantive comment when determining which form letters can be addressed in bulk, meaning one response, or basically one review of a representative sample or exemplar from that particular form letter, and where individual commenters have added content that’s going to require review at a more document-by-document level. And of course, communication and the interplay between AECOM and HaystackID, to make sure that we were capturing all substantive comments and categorizing them correctly, is very important to that process, and it’s similar to the process followed in most, if not all, document reviews.
Yes, that’s correct, and we actually had a two-fold iterative process. We met with the AECOM team members to talk through individual documents and to make sure that our team was on the same page. We had fairly regular meetings, not quite weekly, but pretty close to that, to go through exemplar documents and talk through the reasons why a particular document was substantive or not, and those determinations required the team to develop a really nuanced approach. So, that communication between the AECOM staff and our review team was really important. In addition, we also maintained a Q&A log with questions that we had elevated to the AECOM team through the course of the review, and that’s something that we typically do with almost any review.
And once the team is set up, you really need to figure out how to collect and track the comments that you are receiving.
That’s right, Cameron. So, from the NEPA side of the house, during the two stages of the process, the scoping process and then the public meetings after we’ve issued a draft environmental impact statement, when we are out talking to the public, cooperating agencies, and federal agencies, our public involvement program is riding along the same parallel track, collecting comments at the same time. My team is doing that out in communities, and one of the challenges we have as we start to receive comments is that we get comment forms at meetings, we get handwritten comments, we get comments coming to us from a court reporter at a hearing, and sometimes we get them on voicemail. So our challenge is that we’re getting them in multiple formats. They’re all eventually feeding into us electronically, we’re moving that to HaystackID at that point, and from there, they’re taking it into their system. We also are out collecting comments in sometimes very rural communities, so we have to work around broadband issues. We’re taking banks of laptops into communities as well, then moving the files off those laptops to Cameron and Abby’s team. What becomes really key for us, as Cameron alluded to, is trying to get to the substantive comment. Because everyone walks around with a computer in their hand at this point, weeding out what is substantive versus non-substantive comment becomes really key. We do a lot of education around that when we’re out meeting with people and listening to the public. However, what we also see a lot of now is the influence of social media on projects, and what that has done is grow quite substantially the volume of comment that we will receive on a project.
We anticipate on some projects now over a million comments coming at us, and so that’s where we have to really manage the workflow of that high comment volume, and that’s why Cameron can explain how the steps for that become really key.
Yes, so just going into the workflow a bit. The comments were submitted to AECOM, then delivered to HaystackID’s secure environment, and then we were able to analyze the comments using textual near-duplicate identification. For those of you who are unfamiliar, there are several tools in the eDiscovery world and otherwise whereby documents are analyzed by their text. They’re essentially organized from largest to smallest, and the tool takes that largest document, compares the smaller documents to it, and tries to determine, OK, are there any documents that are at least 80% textually similar? 80% is the lowest threshold that you can set for the tool that we’re using, but you can change that threshold to 80, 90, or 98, depending on what you’re seeing, and it may be important to set up several different TND, or textual near-duplicate, sets, basically running the system a few times. You can run it at a threshold of 80%, at 90%, and perhaps at 95%, and what you’re really trying to do is find the quickest way to identify the unique, substantive comments and/or find an exemplar of the perhaps hundreds of thousands of public comments you could receive that are basically coming from a single form letter campaign or multiple form letter campaigns.
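The threshold-based grouping described above can be sketched with standard-library string similarity. Real eDiscovery engines use far more scalable text-comparison techniques, so treat this only as an illustration, under our own assumptions, of the "largest document seeds the group, others join above a threshold" idea:

```python
from difflib import SequenceMatcher

def group_near_duplicates(docs, threshold=0.80):
    """Greedy textual near-duplicate grouping: the longest document seeds each
    group, and shorter documents join the first group whose exemplar they match
    at or above the similarity threshold."""
    groups = []  # each group is a list of (doc_id, text); index 0 is the exemplar
    for doc_id, text in sorted(docs, key=lambda d: len(d[1]), reverse=True):
        for group in groups:
            exemplar_text = group[0][1]
            if SequenceMatcher(None, exemplar_text, text).ratio() >= threshold:
                group.append((doc_id, text))
                break
        else:
            groups.append([(doc_id, text)])  # no match: start a new group
    return groups

docs = [
    ("c1", "I oppose the project because it will harm salmon habitat."),
    ("c2", "I oppose the project because it will harm salmon habitat!"),
    ("c3", "Please extend the comment period by thirty days."),
]
groups = group_near_duplicates(docs, threshold=0.90)
print(len(groups))  # 2: the form-letter pair groups together; the unique comment stands alone
```

Rerunning with a different `threshold` mirrors the multiple TND sets described above: a lower threshold sweeps more variants into each group, a higher one leaves more documents for individual review.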
And one thing that was really nice about this process is that Tara and her team knew the players in the game, so they would give us a heads-up: please be on the lookout for this or that form letter campaign from each particular, let’s say, environmental group that was going to submit a comment to the process. Upon receipt of new comments, on a rolling basis, we would rerun our textual near-duplicate analysis, and we would try to identify as many textual near-duplicate groups as we could. But you need to be careful, because just because a document is in a textual near-duplicate group with another document does not mean that it’s exactly the same. Depending on where you set that threshold, it could have 80% similarity or 90% similarity, and you need to make sure that the documents are truly alike. The higher you set that threshold, the more certain you can be that the documents are alike, but you also need to be on the lookout for additional comments that might be substantive and might require a response. So, whoever is doing that analysis, and in this case that fell to me, needs to be careful and cognizant that there could be additional substantive comments to look out for, and needs to lean on the eDiscovery tool and the experience of the analytics/review consultant, in this case, to make sure that nothing is missed.
So, one way that documents are analyzed, as I mentioned, is by size, and the field is extracted text data length. Essentially, if all of the documents have the same extracted text data length, along with some cursory sampling, you can usually confirm that all of these comments are the same, and so we only need to look at one exemplar. But where the extracted text data length was different, and certainly at the high and low ends of the range, you need to make sure that there are no additional substantive comments, and if there are comments, it is helpful to have someone with some substantive expertise verify them. So, I was able to do that and try to make the process as quick and as painless as possible, in order to reduce the number of comments that required eyes-on review and, ultimately, a response by our AECOM experts.
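The extracted-text-length check described here amounts to a cheap pre-filter: bucket comments by text length, treat a large bucket as a likely form letter to sample, and flag the outliers at the high and low ends for eyes-on review. A minimal sketch (the function names and the idea of flagging only the extreme lengths are our own illustrative assumptions):

```python
from collections import defaultdict

def bucket_by_text_length(comments):
    """Group comment IDs by extracted-text length as a cheap first-pass signal."""
    buckets = defaultdict(list)
    for comment_id, text in comments:
        buckets[len(text)].append(comment_id)
    return buckets

def flag_outliers(buckets):
    """Return comment IDs at the shortest and longest lengths for manual review."""
    lengths = sorted(buckets)
    return buckets[lengths[0]] + buckets[lengths[-1]]

comments = [("c1", "x" * 500), ("c2", "x" * 500), ("c3", "x" * 500),
            ("c4", "x" * 12), ("c5", "x" * 4000)]
buckets = bucket_by_text_length(comments)
print(flag_outliers(buckets))  # ['c4', 'c5']: the very short and very long submissions
```

Identical lengths do not prove identical content, which is why the transcript pairs this check with cursory sampling before trusting a single exemplar.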
In addition to that, where I could not determine whether a document contained a substantive comment, I would always defer to the experts, and so it’s important to make sure that whoever is doing this process, if this sort of process is going to be followed, accounts for that limitation of the system itself. Once we were able to identify all of the different textual near-duplicate groups, we started naming and organizing them. Basically, there were some form letter campaigns that came in where the substantive content of the campaign was the same even though the organization may have changed, a sentence structure was changed, or sentences were moved around, but the substantive content that required response as part of the NEPA process did not change, and so there, we were able to group them together. Of course, we kept track of the names of the groups and the number of submissions within each group, and made sure that a representative sample from each group ultimately made it to review by our review team, and then ultimately by the AECOM experts as well.
Cameron, I would also note that many times we got a small number of documents that we later learned were form letters, and it was important for you, working through the analytics, to also be in contact with the review team, because sometimes they identified repetitive documents that hadn’t made it through your analysis yet, they hadn’t risen to the number of documents that would hit your threshold. So, we had our team also trained to look for repetitive comments, and draw attention to those as needed, and then also evaluate any sort of varying substantive comments or varying comments to determine whether they contained anything substantive or not.
Absolutely. So, there’s a threshold by which my analysis would end based on the number of documents in any textual near-duplicate group, but as Abby just mentioned, it’s very important to keep the review team involved, and so the reviewers had copies of the various form letter campaigns. So, they could take a peek and say, OK, is the comment I’m looking at now a lot like, let’s say, the form letter number 10 that we’ve already identified? If so, then we were able to basically set that document aside and basically loop that into the textual near-duplicate group and/or the form letter campaign that was already identified, such that it did not invite or require additional analysis. Thank you, Abby.
So, a bit more on workflow, content analysis, and how we collaborated as well. From my perspective at AECOM, and from my team in leading it, what we found is that as we started to receive public comment, we needed to staff so that we had people from our various disciplines who were able to look at the early comments we were receiving, so that we could give Cameron, Abby, and their team a heads-up on the themes we were seeing. It was also really key to supplement our team with someone who could speak the IT language with Cameron and Abby’s team, so that as we were batching the data, we knew they were receiving it as well. Cameron?
Yes, agreed, and it’s important to note that this is a circular and iterative process. Throughout the communication, comment analysis, and reporting, we were providing reports to AECOM, and they were helping us make sure that the reporting we were providing was giving them the information they needed, so that they could ultimately provide an accurate report. Another important consideration is the data process and validation. That’s where, again, having multiple eyes, having the experts look at the reporting we were providing, and making sure that the process we were following was going to meet the standards of the NEPA process itself is very important as well.
That’s exactly right. As part of our analysis, the review team was breaking sometimes longer comments, and sometimes even quite lengthy papers and submissions, into smaller substantive groups that were then sent to Tara’s team to analyze whether or not a response was needed. Sometimes we hit those breaks exactly right, and sometimes we needed to revise the breaks after getting feedback from Tara’s team, and having that communication and that validation process in place made the system move much more smoothly than it would have otherwise.
I would just add to that, one of the features that our team really appreciated was the real-time nature of the reporting, because we were able to see what we were getting comment on, particularly if we were being asked by clients. It was a simple phone call to Cameron and Abby to say, “Hey, can you give me some metrics on this real quick,” so that we were able to answer questions back to our clients. And with regard to validating the data process, we could make sure that if someone submitted a cover letter, we also had the 500 pages of documentation and could confirm it was being looked at at the same time.
Moving on to Documenting the Process. So, quality control is important for everyone, pretty much regardless of what you do, but quality control was important for us in two ways. So, really, there’s the review team and there’s also the review process. And so, being in managed review, we always are cognizant of the quality of the work product that the reviewers are performing, and we needed to be able to continue that to make sure that our reviewers were accurately categorizing documents and delineating the material that required additional review by the AECOM team. So, that was part one of the quality control.
But there’s also the quality control related to the review process itself. So, making sure that the process that we’re following matches the expectations of AECOM, and ultimately, AECOM being our client and our subject matter expert to make sure that we are going to meet the requirements of the NEPA process itself.
And so, when it comes to quality control, part of that is what tool you are using. The nice thing about eDiscovery tools is that they are literally designed to collect and store objective data: who is the commenter? When did the comment come in? From what source was the comment received? Was it a hardcopy comment? Was it an email comment? All that information is important to track at an objective level. But you also need to keep in mind the substantive nature of the comments and the substantive tags that are going to be applied by those analyzing them. Does this comment relate to fish? Does this comment relate to water? Of course, those are very high-level, basic categories, but the idea remains that the eDiscovery tool, whichever tool you decide to use, is very adept at tracking those, and these tools are also very nimble. If you’re working through things and you realize you’re missing one aspect of the comments, or you need additional data points in your reporting, you can usually, pretty quickly and without too much pain, add additional tracking, whether by tags or in the process itself, to make sure you’re going to meet the requirements of the process.
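One way to picture the split between objective metadata and substantive tags is a small record type. The field names below are illustrative assumptions, not any particular tool's schema, but the design point mirrors the "nimble" quality described above: because the tags live in an open-ended set, adding a new tracking category later is a one-line change rather than a schema overhaul:

```python
from dataclasses import dataclass, field

@dataclass
class CommentRecord:
    # Objective metadata, captured at intake
    commenter: str
    received: str          # e.g. "2021-03-14"
    source: str            # "email", "hardcopy", "hearing transcript", ...
    # Substantive tags, applied during review and extensible as needs change
    topics: set = field(default_factory=set)

rec = CommentRecord(commenter="J. Smith", received="2021-03-14", source="email")
rec.topics.add("fish")
rec.topics.add("water")    # adding a new topic later needs no structural change
print(sorted(rec.topics))  # ['fish', 'water']
```

Keeping the objective fields fixed while leaving the substantive tags open-ended is what lets reporting evolve mid-project without disturbing the data already collected.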
And so, in addition to that, eDiscovery tools have a knack for tracking all the breadcrumbs, because people want to know, OK, who made the decision and when. And so, that’s where it’s helpful to use these tools and to lean on these tools because the data points are built into the tool itself.
As far as documentation goes, documenting how the analytics team is managing the comments is also important. I’ve already touched upon the use of textual near-duplicate groups; once they’ve been identified, you track: what form letter campaign is this? What’s the agency, or what’s the non-profit, that submitted it? Each eDiscovery tool identifies textual near-duplicate groups differently, but which groups fall into the same form letter campaign, how you can track them, and how many there are is also important, as well as understanding who submitted them. Whether it’s from a particular agency or a particular non-profit, or from the individual commenters themselves, all of that needs to be tracked and documented.
So, we also needed to be concerned about quality control on the review end, and there, having continuity on the team was key. I mentioned earlier that we started with a small team, just a couple of people, but having that experience, especially as we grew the team, became invaluable. Those early team members had a couple of months to develop a solid understanding of what we were looking for in terms of substantive comments, and they were able to work with the new team members who came on as the volume of comments increased toward the deadline, so that we could stay on the same page as a team and also be on the same page with AECOM’s requirements for what was needed in terms of a deliverable from our team.
Tara, did you have anything to add to documenting the process or would you like to move forward?
I did. I just wanted to talk about how key coordination really was between our two firms in this process, particularly when we got to a point in the project where we were getting an incredible volume of data. We had daily check-ins during, really, those periods of the project, and that became really key in helping us manage the data and in knowing that quality control was being applied to that management process as well.
So, when we first started getting public comment, as I said, my teams out there were interfacing with federal agencies and the public and collecting comments, and what we started to see right off the bat, like we said, is who the players are and where we’re getting comment from. At that point, we’re moving the data to HaystackID, and they’re starting to process it and to categorize and sort the comment.
One thing that we like to see early in the project is what the hot-button issues are. The reason for that is our team wants to know what the public is most concerned about, usually right off the bat. Sometimes we do get a surprise; something we thought was going to be a hot issue isn’t a hot issue, so we’re trying to track that with the data we’re seeing in the early identification coming out of the metrics from Cameron and Abby’s team.
One thing that we really appreciated from their team as well: we are out there interfacing with the public, and sometimes we're out in communities where the project might not be too popular. AECOM's number one priority is the safety of our staff when we are working in field situations. So, we were able to apply the analytics to look specifically for any threats to our staff's safety that might have been hidden inside public comment, so that we could be aware of any potential threats while we were out and about. That was something that I know AECOM really appreciated in the sort of public comment we were getting.
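As a rough illustration of the kind of safety screen described above, the sketch below flags comments matching a watch list of terms so they can be routed for human review. The term list, data shape, and function name are hypothetical; the actual project used eDiscovery analytics, not this simple keyword pass.

```python
import re

# Hypothetical watch-term screen for surfacing comments that may contain
# threatening language, so flagged items get a closer human look.
# The term list and sample comments are illustrative only.
THREAT_TERMS = re.compile(r"\b(threat|hurt|harm|attack)\w*\b", re.IGNORECASE)

def flag_for_safety_review(comments):
    """Return the IDs of comments that match any watch term."""
    return [c["id"] for c in comments if THREAT_TERMS.search(c["text"])]

comments = [
    {"id": 1, "text": "I support the proposed wildlife corridor."},
    {"id": 2, "text": "Your surveyors will get hurt if they come back here."},
]
print(flag_for_safety_review(comments))  # → [2]
```

A keyword pass like this over-flags, which is acceptable here: the goal is to make sure nothing threatening slips through, with a person making the final call.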
And you may notice that under categorization and sorting, there's a reference to topics and sub-topics, which I think is very important from a review perspective. You have to understand the expertise, as well as the limitations and qualifications, of the people you are working with on a project like this, and where the line needs to be drawn between the review team and, let's say, the experts. In document review, we hire attorneys who are adept at and experienced in learning new material and applying guidance and standards provided by the client, in this case Tara and her team. But it's also very important to understand the limitations those reviewers have. The reviewers are not experts in land or water or fish or anything like that.
And so, one thing that was very helpful from the outset is that we designed a process and workflow with overarching topics, and then sub-topics that Tara and her team, who are consultants and experts in this specific field, could really address. It's important to communicate with each other if you are going to bring in review support to help cut down on the number of comments that require review by the experts, which is essentially what the textual near-duplicate identification was focused on. But even thereafter, as the individual comments were broken down by main topic and, ultimately, by sub-topic, it's important to train up your team the best you can.
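The near-duplicate idea mentioned above can be sketched in miniature: group comments whose word sets overlap heavily, so experts only need to read one exemplar per group. This is a minimal sketch assuming a simple Jaccard similarity over word sets; the project itself used Relativity's textual near-duplicate identification, which is far more sophisticated.

```python
# Minimal near-duplicate grouping sketch (Jaccard similarity over word sets).
# Threshold and greedy grouping strategy are illustrative assumptions.
def word_set(text):
    return set(text.lower().split())

def jaccard(a, b):
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def group_near_duplicates(comments, threshold=0.7):
    """Greedily assign each comment to the first group whose exemplar
    it resembles; otherwise it starts a new group."""
    groups = []  # list of (exemplar_word_set, [member_texts])
    for text in comments:
        ws = word_set(text)
        for exemplar, members in groups:
            if jaccard(ws, exemplar) >= threshold:
                members.append(text)
                break
        else:
            groups.append((ws, [text]))
    return [members for _, members in groups]

campaign = "Please protect the river from this project."
groups = group_near_duplicates(
    [campaign, campaign + " Thank you.", "I am concerned about road noise."]
)
print(len(groups))  # → 2
```

One exemplar from each group then goes to the expert queue, cutting the volume experts must read while preserving every distinct position.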
In this case, the first line was the review team, the contract attorneys that we hired; they were sort of the first line of defense, and after that, we handed over to the true experts. So, I would encourage anyone who does this sort of work, or who is thinking about bifurcating the process to make it easier or to cut down on the time the experts have to spend reviewing comments, to be cognizant of how complex the substantive issues are and where the line can be drawn between the review team and the experts at whoever the partner is, in this case AECOM. So, keep that in mind.
And then also, it was very helpful that Tara and her team, through interacting with the community and with all the individuals who were going to participate in the comment process, had a great understanding of the names and actors involved; basically, who we should be on the lookout for. We were given names of individuals, agencies, and non-profits, so we could always ask, "OK, have we received Jane Doe's comment yet? No, please keep checking next week," because we knew that Jane Doe intended to comment and that her comments would be substantive and would require a lot of high-level thought and analysis on the part of AECOM to make sure they were well interpreted and appropriately reviewed. That was very helpful as well, because we could then get that information to Tara and her team as soon as possible; a lot of the comments were very long and very dense. On the review side, we'll do as much as we can early on, but really, we want to get that information to Tara and her team so they can start their process in earnest as soon as possible.
I think it's also worth noting that while we did come in with a number of categories from the AECOM team, as our team worked through documents, there were a handful of times where a particular category had a large number of comments and we would suggest possibly breaking it into sub-categories. So, there was some collaboration in determining ways to better focus in on particular substantive comments. Because our reviewers were experienced in identifying the kinds of documents that would need further substantive evaluation by the AECOM team, they were also able to give suggestions that we hope were helpful to Tara's team.
Right, and Abby, to circle back from there, real-time reporting was one of the real advantages that we saw in this process. It allowed us to start writing reports in real time; we didn't have to wait for the comment period to end, because the data was being processed while we were still in it. Our clients saw that as a real advantage, particularly in being able to ask, "How many comments are we getting on water? How many on wildlife concerns? How many comments regarding environmental justice are we getting?" It was a real advantage in our being able to respond to our clients, and also to the requests we were getting, frankly, from public affairs officers within Federal agencies.
And so, ultimately, what happens with those comments?
So, with regard to tracking, analytics, and indexing, the comments are all boiled down, basically, into a draft report that eventually becomes the final report and part of the record of decision in the NEPA process. As we went through the NEPA process, we wanted to make sure we had identified the concerns brought up in the comments, and we wanted to be able to track who made each comment and how we broke it down, so that if there is a challenge later on, we can go back and say, "Yes, we got John Doe's comment, and this is how we looked at it, and this is how we responded to it as well."
When we were talking at the beginning about legal challenges, that's one of the biggest legal challenges we see right now in the NEPA process: did we take a hard look at what the concerns were, and did we respond to them adequately as we moved forward in the process?
And so, that's why being able to track where the comments are coming from becomes super key, particularly when you're dealing with comment volumes in the hundreds of thousands, getting up toward a million, really. Cameron?
Abby, can you talk about tracking as far as comment author and the like?
Yes, we had a number of different workflows to be able to capture that sort of information. Some comments, of course, they were submitted electronically, and so they had, not all, but many had a name tied directly to the comment already as part of the metadata that we received. In other cases, we had some teams of people that were going through handwritten comments to make sure that we were capturing names from those comments and attaching them to the files. And then we also made sure to go through the form letter submissions that we got, both to assess form letter submissions for substantive content, as well as to make sure that we were identifying the names of any submitters who had taken the time to send in a comment.
And it can be difficult to track when you've got, let's say, thousands upon thousands of names, but we were able to develop a process by which we turned comments into, essentially, Excel files that had the names of all the commenters.
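A roster export like the one described above can be sketched with the standard library's `csv` module, producing a file Excel can open directly. The field names and sample records below are hypothetical.

```python
import csv

# Hedged sketch: export a commenter roster to CSV for tracking in Excel.
# Column names and the sample data are illustrative assumptions.
def write_commenter_roster(comments, path):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "comment_id", "date"])
        writer.writeheader()
        for c in comments:
            writer.writerow(
                {"name": c["name"], "comment_id": c["id"], "date": c["date"]}
            )

comments = [
    {"name": "Jane Doe", "id": "C-0001", "date": "2020-11-02"},
    {"name": "John Doe", "id": "C-0002", "date": "2020-11-03"},
]
write_commenter_roster(comments, "commenters.csv")
```

With names in one flat file, checking whether an expected commenter has been heard from yet becomes a simple lookup rather than a document-by-document search.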
Another thing to consider is whether any of these comments are repetitive. One thing we noticed along the way was that when commenters submitted their comments on the website, they weren't always getting a confirmation that the comment had been submitted. So, again, by using a tool that has all the objective data, tracked upon submission and ultimately loaded into the eDiscovery tool we were using, we could identify whether something was an additional new comment or the same comment coming from the same person, who was just confused and didn't know that their comment had gone through. It's nice to be able to lean on the tracking that's there, to make sure that you're responding to all the comments as needed, and also that you're not spending time on duplicate submissions.
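The resubmission problem above comes down to keying each submission on submitter plus normalized text. Here is an illustrative sketch under that assumption; the real workflow relied on the eDiscovery tool's own tracking, not this code.

```python
import hashlib

# Illustrative sketch: treat repeated submissions from the same person
# with the same normalized text as duplicates of the first one seen.
# The data shape is an assumption for the example.
def dedupe_submissions(submissions):
    seen = set()
    unique = []
    for s in submissions:
        # Normalize whitespace and case so trivial variations still match.
        key = hashlib.sha256(
            (s["name"].strip().lower() + "|" +
             " ".join(s["text"].split()).lower()).encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique

subs = [
    {"name": "Jane Doe", "text": "Please extend the comment period."},
    {"name": "Jane Doe", "text": "Please  extend the comment period."},  # resubmitted
    {"name": "John Doe", "text": "Please extend the comment period."},
]
print(len(dedupe_submissions(subs)))  # → 2
```

Note that identical text from a *different* person is kept: two people sending the same form letter are two commenters, not a duplicate.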
Moving on to Best Practices.
So, again, kind of looking at how the overall workflow started. Going back to kicking off the project, even when we were just proposing on it, we talked with the team about how we were going to do this, particularly knowing that on the project we were working on, we could have gotten upwards of a million comments. Setting expectations early for who was going to do what, and how we were going to do it, became key to the success of working together and keeping things moving.
And that goes back to our center bullet there, the daily tracking, threat identification, and reporting that we were then able to move out to our team and to the people we were working with. Data validation was a really key step in the process for seeing what was and wasn't working, and I'll let Cameron speak to that specifically a little more.
So, Data Validation, Proper Documentation, and Managing Deadlines. As far as data validation: have we received all of the comments? It was very important for us to work with AECOM from a technical perspective to compare how many documents they had received against how many documents we saw in our eDiscovery tool once we ingested them there. We also used a second tool to help split the comments into their individual substantive aspects, by topic or sub-topic. So, it was important for us to make sure of two things. Step one: have we captured all of the comments? Step two: have we separated the substantive from the non-substantive comments? All those that are substantive need to be analyzed as well. Regardless of which tools you're using, you obviously need to make sure that you're tracking all of the comments, that you're giving a good response and considering every comment that is substantive in nature, and that you're appropriately breaking down the topics and sub-topics.
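The count-reconciliation step described above can be reduced to a couple of arithmetic checks. This is a minimal sketch assuming you can pull document counts from each stage of the pipeline; the function name and sample counts are made up.

```python
# Minimal data-validation sketch: reconcile document counts across the
# intake, ingestion, and topic-splitting stages. All numbers illustrative.
def validate_counts(received, ingested, split_substantive, split_nonsubstantive):
    """Return a list of human-readable issues; empty means all stages reconcile."""
    issues = []
    if ingested != received:
        issues.append(f"ingest mismatch: received {received}, ingested {ingested}")
    if split_substantive + split_nonsubstantive != ingested:
        issues.append("split totals do not reconcile with ingested count")
    return issues

print(validate_counts(received=1000, ingested=1000,
                      split_substantive=640, split_nonsubstantive=360))  # → []
```

Running a check like this daily (or per load) is what lets a team say with confidence, at the end of the comment period, that no submission was dropped between stages.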
And managing deadlines can be very difficult in this process. Again, Tara and her team did a great job of letting us know their expectations as far as agencies or non-profits or individuals that would be submitting comments, and it’s very important to be nimble and to have infrastructure in place so that if, all of a sudden, at the end of the comment period, you do have a deluge of comments that come in, that you’re able to quickly and efficiently analyze them.
And for the most part, the comments that we received towards the end were related to form letter campaigns, so again, we were able to take just one exemplar and use that as the representative for that entire population, but you need to make sure you’re not missing any additional substantive comments that were either typed in or handwritten on any of those form letter campaigns, and then you also need to make sure you track those really large comments, because those can take some time.
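The form-letter check described above is essentially a diff against the campaign exemplar: anything the submitter added beyond the template may be substantive and must not be missed. Below is a hypothetical sketch using the standard library's `difflib`; the exemplar and submission text are invented for the example.

```python
import difflib

# Hypothetical sketch: surface text a submitter added on top of a
# form-letter exemplar, so typed-in or transcribed handwritten
# additions get routed for substantive review.
def added_text(exemplar, submission):
    """Return the words present in submission but not in the exemplar template."""
    sub_words = submission.split()
    matcher = difflib.SequenceMatcher(None, exemplar.split(), sub_words)
    extras = []
    for op, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if op in ("insert", "replace"):
            extras.extend(sub_words[j1:j2])
    return " ".join(extras)

exemplar = "I oppose the project because it threatens salmon habitat."
submission = ("I oppose the project because it threatens salmon habitat. "
              "Also, the access road crosses my family's land.")
print(added_text(exemplar, submission))
```

Submissions whose added text comes back empty can safely be counted under the campaign exemplar; anything nonempty goes to a reviewer.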
And so, working in a review environment, you get used to, and become adept at, adding team members as needed to make sure you meet deadlines. That's where it was important, throughout the process, to keep a good, solid base of subject matter experts, at least internally, to respond to the comments we were receiving, with enough work to keep them busy and engaged and to keep their subject matter expertise on board. That way, at the end of the process, as the deadline nears, if the people commenting decide to submit many, many comments, you have the infrastructure in place to either (a) review those comments or (b) add team members to review the additional comments as well.
Abby, did you have anything to add about best practices?
I don’t think so. I think that you covered just about everything that I would have said, so I think I’m good.
Sounds good. So, moving into how can we help you and questions.
I do have one question so far, and this question is for Tara. So, Tara, with the change in administration, looking into 2021 and 2022, do you see an increase in NEPA review work and how do you see the industry changing and evolving as time goes on?
So, yes, very much so. You can't track the news right now without seeing mention of a very large infrastructure package proposed by the new administration, with lots of triggers for NEPA in there, particularly for wind, tidal, and green energy-related projects. Another really big trend we see in NEPA work, particularly with the new administration, is projects where the NEPA analysis will focus on environmental justice and social justice as the bigger concerns as we start to build out infrastructure packages. So, we see it getting pretty busy; I think planning efforts would commence probably at the back end of this year or the early part of next year with regard to environmental assessment, environmental impact statement, and NEPA work. We see it growing pretty fast and pretty furious, at least over the next year and a half ahead.
Thank you, Tara. And thank you to the audience for joining us today. If you have any questions, please feel free to reach out to HaystackID or AECOM. As Tara just suggested, this work is only going to continue, and we expect the volume of comments in these public comment processes to keep growing. So, we think it's important to have good stewardship, with a good company like AECOM, who are subject matter experts in this field, leading that effort. We also think it's important to have good infrastructure in order to efficiently review and analyze the comments, and that's where a company like HaystackID can come in to help support as well.
Thank you very much to the audience who joined us today. Thank you to all of my co-presenters. Thank you again to Tara and Abby for your participation and thank you to all those who will listen to this recording in the future.
Have a great day everyone.