EC Post: Classification and Secrecy... and Algorithms?

Introduction:

So this lecture was primarily on secrecy and classification. The speaker was Hugh Gusterson, who has a legit resume; you can look him up if you'd like. His grounding in secrecy comes from cultural anthropological fieldwork at the Lawrence Livermore nuclear weapons laboratory in California. First off, I'll say that I did not find his talk very rhetorically appealing. The speaking rate was slow, the slides included lots of text, and much of the content was devoted to establishing his credibility. I already thought he was credible, and I would have preferred much more paraphrasing and directness, which would have made his logical arguments easier to digest. Additionally, he did not clearly state his main argument. I'll give a quick summary, and then my review of the presentation.


Summary:

He began by talking about the cultural implications of secrecy, such as how clearance can give a sense of privilege, or how secrecy can lead to a lack of intimacy in a weapons scientist's marriage. He included a number of anecdotes about the lifestyle of a weapons scientist, and brought up some unique perspectives, such as: how can one reflect on the ethical implications of one's job without enough clearance to know what the weapon can do or whom it will target? He also made an interesting point about the scientists he asked about the ethical implications of their work: most did not respond with the lack of reflection he expected. Instead, they invoked the principle of deterrence, reasoning that nuclear weapons are made not to kill but to scare.

He then talked about classification, particularly overclassification. He gave three laws of classification that often lead to overclassification: avoid embarrassment, play it safe, and classification creep. Essentially: if there's a screw-up, cover it up; if there's any doubt, classify it; and the classification of secret information creeps into the classification of the contextual information surrounding it. In this section, he also mentioned that the amount of classified information is growing at a faster rate than the amount of public information. He then progressed to his main point: classification is an inherently expansionary process, driven by a cycle of crisis, secrecy and overclassification, secret build-up, leaks, and partial declassification. He concluded by venturing into surveillance, but didn't really flesh out the topic.


Opinion:

So as I stated earlier, I did not like the speaker too much. His main point, that classification is an inherently expansionary process, was not introduced until the last 5 minutes of his roughly 60-minute talk. I agree with the point, but the talk could have been considerably shorter. I think much of my distaste for the presentation came from my similar distaste for academia, which I often see as trivial and slow-moving. Thankfully, the majority of questions centered on the practical side of his talk, not the philosophical or anthropological sides. The questions raised a good point: there is such a mass of classified documents that there is no looking back, no way of declassifying all the information that should be declassified. Something I would have liked to hear more about was the negative implications of overclassification, but perhaps that was seen as common knowledge, since it was not brought up in the conversation.

I also would have preferred more focus on classification and secrecy in modern times. There was a short mention of leaks, but no mention of hacking. Even when I asked a question about the role of algorithms in classification, it was quickly disregarded. I found the conversation to be a bit brick-and-mortar: where I wanted talk of digital information, there was talk of a red "Top Secret" stamp. It's 2016! I hoped the waxy, red "Top Secret" stamp was no longer around, but it sounds like stamps are still in use.


Relation to our class:

It looked like the speaker was going to dive straight into topics covered by the privacy and ethics group when he introduced surveillance, but the talk quickly moved back to classification. I think the two often overlap but are distinctly different: surveillance implies that one party does not know data is being recorded about them; classification implies secrecy, but not necessarily surveillance. Our class focused on surveillance, but I think both topics are players in a larger argument over privacy vs. security that our society should be having now. This topic of classification ties back into the reading we had on Edward Snowden. Snowden leaked classified documents, but were some of those documents overclassified? Perhaps his main purpose was to make public the type of data the NSA collects so that the nation could have a debate over privacy vs. security. If that information is pertinent to public consent and voting, should it have been classified in the first place?

The tie-in to our class that I wanted to see was algorithms being used to classify or declassify information. When covering predictive analytics, the book Big Data: A Revolution That Will Transform How We Live, Work, and Think introduced a framework of three types of big data companies: data owners, data intermediaries, and companies with a big-data mindset. Governmental organizations can be viewed as perhaps the largest data owners of all. The questions of ethics that come into play with the secrecy of big data companies are only more important for governmental organizations, because they are accountable to the public. If the government were to move its classification process out of human hands and into the control of algorithms, perhaps the problem of overclassification could be alleviated, and perhaps more transparency could be achieved, without simultaneously releasing important government secrets.
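To make that idea concrete, here is a minimal, purely hypothetical sketch in Python of one small piece such a system might contain: a keyword-based scorer that flags low-sensitivity documents as candidates for declassification review. Every term, weight, and threshold below is invented for illustration; a real system would need vetted criteria, far more sophisticated models, and human oversight.

    # Hypothetical sketch: a keyword-based scorer that flags documents as
    # candidates for declassification review. All terms, weights, and the
    # threshold below are invented for illustration only.

    SENSITIVE_TERMS = {
        "warhead design": 1.0,
        "launch code": 1.0,
        "source identity": 0.9,
        "budget estimate": 0.3,
        "meeting agenda": 0.1,
    }

    # Documents scoring below this suggest possible overclassification.
    REVIEW_THRESHOLD = 0.5


    def sensitivity_score(text: str) -> float:
        """Return the highest weight among sensitive terms found in the text."""
        lowered = text.lower()
        matches = [w for term, w in SENSITIVE_TERMS.items() if term in lowered]
        return max(matches, default=0.0)


    def recommend(text: str) -> str:
        """Recommend an action for a classified document based on its score."""
        if sensitivity_score(text) >= REVIEW_THRESHOLD:
            return "keep classified"
        return "flag for declassification review"


    if __name__ == "__main__":
        docs = [
            "Meeting agenda for the quarterly safety briefing.",
            "Updated warhead design tolerances for the W-XX program.",
        ]
        for doc in docs:
            print(f"{recommend(doc):<35} <- {doc!r}")

Even this toy version shows the appeal: the rules are explicit and auditable, unlike the "play it safe" instinct of a human classifier that the speaker described.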
