Boston Marathon Bombing Coverage–02 Sep 14

Gelzinis: Dzhokhar’s pal lost in translation
Peter Gelzinis
Boston Herald | August 22, 2014

GUILTY: Dias Kadyrbayev, a college friend of Boston Marathon bombing suspect Dzhokhar Tsarnaev, with his defense attorney Robert Stahl at his left, pleads guilty yesterday before Judge Douglas P. Woodlock for impeding the investigation into the deadly attack.

Dias Kadyrbayev came to court yesterday flanked by his lawyer and his translator, a woman who looked a lot like my third-grade teacher.

She planted herself by Kadyrbayev’s left shoulder and only sprang into action on those occasions when the 20-year-old Kazakh’s brow wrinkled, or U.S. District Court Judge Douglas P. Woodlock would ask, “Do you understand me?”

The kid nodded politely, answered, “Yes sir,” and “No sir,” but never once, “I understand, your Honor.”

This bit of lost-in-translation became all the more curious when Assistant U.S. Attorney Stephanie Siegmann rose to read six pages of stipulated facts about Dias’ role in trying to get rid of evidence that allegedly ties his UMass Dartmouth buddy, Dzhokhar Tsarnaev, to the marathon bombings.

The most chilling part, the one without a trace of an accent, was the fateful text exchange Dias had with Tsarnaev on Thursday, April 18, 2013, the night he saw his friend in those pictures the FBI flashed around the world. There was nothing “foreign” about it.

“Yo, bro, u saw the news?” Dias asks.

“Yea bro, I did,” Tsarnaev responds.

“For real?” texts Dias.

“I saw the news,” Tsarnaev replies, then follows it up with a warning, “Better not text my friend.”

Though Dzhokhar tries to lighten things up with a quick “LOL,” Dias asks, “U saw yourself in there?” meaning strolling with backpacks among all those unsuspecting strangers on Boylston Street.

Dias then adds, “ahaha…hahaha.”

What kind of virtual laugh do you suppose that was? As it turns out, it marked Dias Kadyrbayev’s entry into a situation that would leave him up to his ears in a vicious terrorist incident, copping a plea to obstruction of justice.

When the brief courtroom proceeding was over, Kadyrbayev’s lawyer, Robert Stahl, told reporters that he was convinced his client had no role in planning the bombing and no knowledge that his friends might be involved.

That might well be true. But when it comes to this horrific act, joining in the cover-up is just as bad. I rode the elevator down yesterday with a sergeant from the Somerville Police Department, a woman who politely declined to say anything beyond, “I needed to be here.”

This cop came to court yesterday to see a kid admit that he obstructed the justice that might well have prevented the murder of MIT police Officer Sean Collier, who had just learned he was going to join the Somerville police.

That text conversation, which is bound to play a role in Dzhokhar’s upcoming trial, ends with Tsarnaev telling Dias, “If yu want yu can go to my room and take what’s there…” He ends with “Salam aleikum.”

Dias responds with: “what’s wrong with u?”

He should have taken that question to the police.


Boston Marathon bombing: Dias Kadyrbayev guilty of obstructing justice
Prosecutors to ask for seven years or less, but judge will review deal
Associated Press | Aug 21, 2014

Attorney Robert Stahl speaks to media outside federal court in Boston, after his client, Dias Kadyrbayev, pleaded guilty to impeding the investigation into the deadly attack in April 2013. (The Associated Press)

A friend of Boston Marathon bombing suspect Dzhokhar Tsarnaev pleaded guilty Thursday to impeding the investigation by removing incriminating evidence from Tsarnaev’s dormitory room several days after the deadly attack.

Dias Kadyrbayev, 20, admitted in federal court that he removed Tsarnaev’s laptop computer and a backpack containing fireworks that had been emptied of their explosive powder from Tsarnaev’s room.

Twin bombs placed near the finish line of the 2013 marathon killed three people and injured more than 260.

Under a plea agreement, federal prosecutors said they would ask for no more than seven years in prison. The agreement allows his lawyer to argue for a lesser sentence. The Kazakhstan-born Kadyrbayev also agreed not to fight deportation after he completes his prison sentence.

Judge will review plea agreement

U.S. District Judge Douglas Woodlock set sentencing for Nov. 18 but did not immediately accept the plea agreement, saying he first wanted to review a report that will be prepared by the probation department.

Kadyrbayev’s decision to plead guilty came just two weeks before he was scheduled to go on trial and a month after his friend and co-conspirator, Azamat Tazhayakov, was convicted of identical charges by a jury.

During Tazhayakov’s trial, prosecutors described Kadyrbayev as the leader in the decision to remove the items, but said Tazhayakov agreed with the plan. They said Kadyrbayev was the one who threw away the backpack and fireworks, which were later recovered in a landfill.

Kadyrbayev’s lawyer, Robert Stahl, said his client made a "terrible error in judgment that he’s paying for dearly."

Stahl emphasized that Kadyrbayev — a native of Kazakhstan who came to the U.S. in 2011 on a student visa — "had absolutely no knowledge" that Tsarnaev and his brother, Tamerlan Tsarnaev, were planning to bomb the marathon and was "shocked and horrified" when he learned they were suspects.

He said Kadyrbayev, who was 19 at the time, "now understands he never should have gone to that dorm room, and he never should have taken any items from that room."

Backpack, laptop taken from dorm room

His plea agreement with prosecutors does not make any mention of him agreeing to testify against a third friend who was also charged. Robel Phillipos is accused of lying to investigators about being present when Kadyrbayev took the items from Tsarnaev’s room. Phillipos is scheduled to go on trial next month.

The backpack, fireworks and laptop were taken from Tsarnaev’s room hours after the FBI publicly released photographs and videos of Tsarnaev and his brother as suspects in the bombing.

Prosecutors said Kadyrbayev exchanged text messages with Tsarnaev after seeing the photos, and Tsarnaev told him he could go to his dorm room and "take what’s there."

Prosecutors said the fireworks had been emptied of explosive powder that can be used to make bombs.

Tamerlan Tsarnaev was killed in a shootout with police several days after the bombings. Dzhokhar Tsarnaev has pleaded not guilty to 30 federal charges and faces the possibility of the death penalty if convicted. His trial is scheduled to begin in November.


Dzhokhar Tsarnaev’s College Friend Pleads Guilty
If a judge accepts the agreement, Dias Kadyrbayev, facing obstruction charges for disposing of Tsarnaev’s backpack after the Marathon bombings, will serve a maximum of seven years.
Susan Zalkind
Boston Daily | August 22, 2014

In a major turnaround, Dzhokhar Tsarnaev’s college friend Dias Kadyrbayev pleaded guilty to charges he obstructed the investigation into the Boston Marathon bombings in a court hearing on Thursday.

Wearing a blue shirt and jeans, 20-year-old Kadyrbayev admitted he knew Tsarnaev was a bombing suspect when he went into Tsarnaev’s UMass Dartmouth dorm room and took his laptop and a backpack containing fireworks, Vaseline, and a thumb drive, and then threw the backpack into a dumpster. His guilty plea is the result of an agreement worked out between prosecutors and the defense, whereby Kadyrbayev will only serve a maximum of seven years instead of the potential 25 if found guilty. Judge Douglas Woodlock must still approve the plea agreement for the deal to move forward.

The prosecution said it took 25 agents two days to search through a landfill to find the backpack, and once they did, the items and the backpack had been altered.

“Is it all true?” Woodlock asked.

“Yes,” said Kadyrbayev, with his head down.

He stood solemnly when entering his guilty plea, a shift from his typically jovial mood—he started the hearing by flashing his attorney Robert Stahl a toothy grin. Despite the serious nature of his charges, Kadyrbayev comes off as a bit of a class clown. He has already taken the stand in an attempt to suppress statements he made to the FBI on the grounds that he did not understand his Miranda rights. Expert witnesses argued that his reliance on slang masked his inability to comprehend complex phrases. Back in June, his first word to the court was, “Sup?”

Stahl later told reporters that Kadyrbayev has spent the past year alone in his cell, reflecting on his actions. “He understands he should not have gone to that room,” he said. “He did not do so out of malice.” None of Tsarnaev’s friends facing charges are accused of knowing about the bombing beforehand.

Kadyrbayev’s plea is just the latest in a series of legal developments stemming from Tsarnaev’s associates, coming just a month after his friend and co-conspirator Azamat Tazhayakov was found guilty of obstruction after agreeing with Kadyrbayev to remove and throw out Tsarnaev’s backpack. He could face up to 25 years.

Tazhayakov and Kadyrbayev are both from Kazakhstan. They became friends with Tsarnaev in 2011 during their first semester at UMass because they all spoke Russian and, according to friends’ testimonies, bonded over an interest in video games and weed. In an opening statement, Tazhayakov’s attorney, Matthew Myers, argued that they originally went to Tsarnaev’s room to get his marijuana.

Missing from the courtroom yesterday was Robel Phillipos, another friend of Tsarnaev’s who allegedly was in the dorm room when Kadyrbayev and Tazhayakov took the backpack. Phillipos is not facing charges of obstructing justice but is facing one count of lying to the FBI. His trial is set for next month.

Phillipos grew up in the same Cambridge apartment complex as Stephen Silva, who was arrested last month for selling heroin and for possessing a firearm with an obliterated serial number in February 2013. The Ruger model P95 is believed to be the same gun the Tsarnaevs allegedly used to shoot and kill MIT officer Sean Collier.

“He basically let him use it but having no idea what he was going to do with it, and next thing you know, he’s a terrorist,” said a friend of Silva’s who asked not to be named.

Silva was friends with Dzhokhar and has an identical twin, Steven Silva, who was reportedly even closer to the Tsarnaevs. Stephen Silva was arrested in November 2013 and told law enforcement, “I smoke weed because my friend is the bomber.” Silva’s friends tell Boston magazine that Silva grew increasingly depressed after the bombing. His next court hearing is set for October.

Two additional Tsarnaev friends, Khairullozhon Matanov and Konstantin Morozov, were detained in separate incidents on May 30 of this year. Matanov is charged with three counts of lying to federal authorities and two counts of obstructing justice. His trial is set for June 2015.

Morozov was detained on immigration charges. His attorney Carlos Estrada says Morozov was applying for asylum and was detained after FBI agents asked him to become an informant. Morozov refused.

Tsarnaev’s capital case is set to start in November. The emerging theme from the testimony and documents of Tsarnaev’s associates’ cases is the younger Tsarnaev’s cool demeanor in the days after the bombings. In a video released in Tazhayakov’s trial, Tsarnaev appears to smile nonchalantly on the way to the gym, just a day after the bombings.


Legal analyst Tom Hoopes discusses Kadyrbayev plea
7News Boston WHDH-TV | Aug 21, 2014

BOSTON (WHDH) – Dias Kadyrbayev, a friend of accused Boston Marathon bomber Dzhokhar Tsarnaev, pleaded guilty to obstruction of justice and conspiracy charges Thursday.

Legal analyst Tom Hoopes weighed in on the hearing. He said the outcome of Azamat Tazhayakov’s trial likely influenced Kadyrbayev’s plea.

"I think probably if they tried this case, exactly the same thing was likely to happen, at least that’s what the defendant and his lawyer thought. The prosecution was going to call all kinds of witnesses and this defendant wasn’t going to have anybody to call, and in this environment, the jury was probably going to find him guilty, and as a result of all that, he was going to do a longer sentence," he said.


Guilty plea opens evidence vs. Tsarnaev
Experts: Prosecutors must prove conspiracy
Bob McGovern
Boston Herald | August 22, 2014

Evidence dug up as part of yesterday’s guilty plea by a former college roommate of Dzhokhar Tsarnaev could be used against the accused Boston Marathon bomber if prosecutors can show they were part of a conspiracy to thwart investigators, according to legal experts.

“If people are considered co-conspirators, anything one says can possibly be used in the case of another,” said Peter Elikann, a Boston criminal defense attorney. “If these guys were doing anything to help Dzhokhar out, and he knew about it, they would be considered co-conspirators since they worked together to achieve a goal — to get rid of the evidence.”

Dias Kadyrbayev, 20, pleaded guilty yesterday in federal court to charges that he hindered the investigation into the deadly 2013 bombings. He could spend up to seven years behind bars if Judge Douglas Woodlock approves the agreed-upon plea.

As part of his plea, Kadyrbayev admitted to a series of facts, including a text exchange with Tsarnaev that occurred after the attacks.

One comment could show that Kadyrbayev and pal Azamat Tazhayakov conspired with Tsarnaev to hide a backpack and laptop that were key aspects of the obstruction charge Kadyrbayev admitted to.

“If yu want yu can go to my room and take what’s there (SIC),” Tsarnaev texted Kadyrbayev, after it became clear that Tsarnaev was involved in the twin bombings that killed three and injured more than 260.

The statement, which was made before Kadyrbayev and Tazhayakov raided Tsarnaev’s University of Massachusetts Dartmouth dorm room, could show that they were in a conspiracy to obstruct the investigation. If prosecutors prove the conspiracy, Tsarnaev’s words could be used against him as a co-conspirator, even if he isn’t indicted as one, according to an expert.

“As long as the government can establish someone is a co-conspirator in the charged conspiracy, they don’t have to be indicted,” said Brad Bailey, a criminal defense attorney and former federal prosecutor. “It is sufficient to label someone an unindicted co-conspirator. However, the government still must prove the existence of the conspiracy charged and that the unindicted co-conspirator was part of it.”

Tazhayakov was found guilty of obstruction and conspiracy charges last month. He faces up to 25 years in federal prison when he’s sentenced Oct. 16. Kadyrbayev is set to be sentenced Nov. 18.

A third friend, Robel Phillipos, is charged with lying to investigators.


Boston Marathon Bombing Coverage–01 Sep 14

The Tsarnaev Women Tell Chechnya’s Story
Julia Ioffe
The Moscow News | July 23, 2014

There were three important women in Tamerlan Tsarnaev’s life—five, if you count his sisters—and each is a window into the culture to which he seemed to cling in the final years of his life.

First, there is his aunt, Maret Tsarnaeva, a Chechen refugee from Kyrgyzstan and now a resident of Toronto, by way of the U.S. In a press conference the day her nephew Dzhokhar was being hunted in the streets of suburban Boston, Maret, with her rust-colored hair and silvery manicured nails, gave a magnificent performance. She was brassy and assertive, commanding the attention of the reporters calling to her with questions. “I’m lawyer from back home,” she declared, exhorting the FBI to prove to her that her nephews were responsible for the bombing of the Boston marathon. “How difficult is that? Give me evidence!” she demanded, flicking her hand into the air as if peppering the press with her disdain. She talked about her nephews, but also about her youth in Kyrgyzstan, where the Tsarnaev brothers spent part of their childhoods. As a Chechen, Maret said she had to prove her mettle, to go over and above her Kyrgyz and Kazakh peers because, unlike them, “I was not in my land.” Asked about Tamerlan’s radicalization, Maret acknowledged that he did indeed turn to Islam in recent years. “He started praying five times a day, but I don’t see what’s wrong with that,” she said. “You just say words, gratitude to Creator.”

Maret is the old Chechnya: secular, Soviet, severed from its roots, paranoid and cynical in its knowledge, acquired painfully and firsthand, of what a government can do to its subjects. When Maret talked about her nephews being framed, she knew what she was talking about: “Lawyer from back home” actually meant state prosecutor, a key actor in a judicial system that was in practice a political bludgeon, one that actively invented charges against potential opponents. Maret also talked about Islam as a thing that is both native and foreign to her. Islam was something into which she was born, and which, to her, likely, is a set of pleasant traditions and holidays that give her a sense of belonging to an old history. For someone who had a Soviet upbringing, being born a Muslim was akin to being born Chechen; it was just another mark of ethnicity, and, towards the end of the Soviet experiment, didn’t mean much more than having a non-Slavic name.

Enter her sister-in-law, Zubeidat Tsarnaeva, wife to her brother Anzor, mother to her nephews Tamerlan and Dzhokhar. You look at that old baby photo of Tamerlan from the late 1980s, and you see Zubeidat looking like a more sullen version of Maret. Her hair is uncovered and fashionably teased; her dress is secular, even stylish. At a press conference in Makhachkala, Dagestan, a quarter of a century later, she is a woman transformed, though the long, morose face is still the same. In between, she had moved from the wasteland that was nominally Buddhist Kalmykia, where Tamerlan had been born, to nominally Muslim Kyrgyzstan, had another son, Dzhokhar, and two daughters, emigrated to America, gone to beauty school, married off her older son and daughters with uneven success, was arrested for shoplifting, divorced her husband, and moved back with him to her native Dagestan.

Somewhere along the way, Zubeidat found Islam in a way Maret never did. It is said that Zubeidat pushed Tamerlan toward the old faith when he started to lose his way, and it is also said that Mikhail Allakhverdov, the mysterious “Misha,” a Ukrainian-Armenian convert to Islam, had pushed Zubeidat or Tamerlan or both closer to Islam. And from there, Tamerlan and Dzhokhar seem to have moved on to more intense forms of the religion, including an interest in the radical cleric Anwar al-Awlaki. It is something that seems to have percolated through the house and into Zubeidat’s newfound faith: She told one of her customers that the September 11 terrorist attacks were an inside job designed to turn the world against Muslims. “My son knows all about it,” Zubeidat is said to have claimed. “You can read on the Internet.”

Zubeidat is the new Chechnya, and the new Dagestan. At the Makhachkala press conference, she is dressed in a long-sleeved black caftan, her face framed tightly by a black and white hijab. Her mourning is expressive and theatrical, almost Middle Eastern. She talks about how she regrets moving to America— “why did I even go there?”—about how she expected America to keep her children safe, but instead “it happened opposite,” she says, weeping. “America took my kids away.” If the Tsarnaevs hadn’t emigrated, Zubeidat contends, “my kids would be with us, and we would be, like, fine.”

That, in the new Chechnya and the new Dagestan, is highly unlikely. While the Tsarnaevs were in Kyrgyzstan and America, the region began to change rather violently. After the First Chechen War ended in 1996, Chechnya became a mix of lawless wilderness rife with violence and kidnapping, and pockets ruled by fundamentalist warlords, like Aslan Maskhadov. After a second war between Russia and Chechnya broke out in 1999 and dragged on for years, Vladimir Putin installed Ramzan Kadyrov as president of Chechnya. Kadyrov was the son of a separatist mufti and led a vicious militia that switched to the Russian side early in the second war and became allied with the FSB.

Kadyrov, who now posts photographs of his devout family at play or going on Muslim pilgrimages on his Instagram account, is accused of grotesque human rights violations. He now rules Chechnya with a mix of terror and a torrent of money from Moscow. He has led Chechnya down the path of increasing Islamization. Women are now required to cover up, lest they be harassed by the authorities or, worse, subject to paintball attacks by Kadyrov’s modesty vigilantes. Kadyrov has also voiced his support of honor killings, a rather stark turn for the once secular republic. “Now Chechen women must wear hijab and long dress with long sleeves to go anywhere out of home. There have been many situations of the public humiliation of those who tried to resist,” a Chechen woman told me. She asked to remain anonymous for fear for her family’s safety. “The previous generation was under the radicalization of Wahhabi regime during 1996-1999, but the Wahhabis lost, they didn’t achieve the goal to cover all Chechen women with hijab. But now the government has achieved that goal. This young generation of radicalized girls and boys might be a real threat to the society in the nearest future.”

Even before this policy had firmly taken root, the region became a source of unique terrorism: the female suicide bomber. The first woman to detonate herself was 22-year-old Khava Baraeva, who, in 2000, drove a truck packed with explosives into a local Russian military base, killing three. She was going after the commander who had killed her husband. Other Chechen and Dagestani women followed her lead, blowing up military posts as well as civilian targets inside Russia. Two women, for example, simultaneously brought down two Russian airliners in 2004, killing 89, and two young Dagestani women blew themselves up in the Moscow metro, in March 2010, killing 40. Half of the terrorists who seized the Dubrovka theater in Moscow in 2002 were women, strapped with explosives. Experts estimate that up to 40 percent of suicide bombings originating in the region are perpetrated by women.

The women have come to be known in Russia as “Black Widows.” At home they are known as shakhidki, the Russianized feminine form of shakhid, or martyr. “A lot of the women in these radical Islamic groups, for example, in Indonesia, they don’t get personally involved in frontline warfare but they raise their sons so that if their father is killed, they can step right away into his shoes,” says Mia Bloom, a scholar at Penn State’s International Center for the Study of Terrorism, and author of Bombshell, a book about women suicide bombers. “Women act as the glue within the terrorist cell,” she explains. “The daughter of one cell leader will marry a cell leader in another area to create linkages, like in 15th century European courts. And the women are to make sure that their men stayed fierce.” Bloom adds that, though it’s hard to do this in the U.S., in conflict zones “the mothers will convey a certain ideology or worldview to the children.” Other mothers, like Mariam Farhat, a Hamas activist, encouraged their sons to go on suicide missions; Farhat publicly bemoaned the fact that she didn’t have more sons to send into battle.

Chechen and Dagestani women took it one step further; they went into battle themselves. It is a stunning paradox, given that at home they live in what Bloom calls “an extraordinarily patriarchal society—so much so that the women at the Dubrovka theater were wearing explosive belts, they were not the ones with the detonators.” The man is the means and the ends of a Chechen home. When a Chechen woman is married, she is not allowed to speak at the wedding. Often, her relatives can’t even come. It is a celebration of the man’s acquisition. “In a Caucasian family, where the man dominates, woman is raised to take care of the man and to sacrifice for the man,” the Chechen woman told me. “The Caucasian code of ethics requires the woman to be modest and quiet. But during the war in Chechnya I have witnessed so many times how Chechen women would step before tanks and armed soldiers, aiming weapons at them, if their men were in danger of being captured or killed. So, this socially required behavior changes when it comes to a life and death issue. Mothers are ready to sacrifice for their sons, sisters for their brothers, wives for husbands, and so on.”

Though Zubeidat refuses to accept her sons’ guilt—“No, never,” she said that day in Makhachkala—and though a Russian wiretap caught her talking with Tamerlan about jihad, it seems unlikely that she would strap herself with explosives and charge forth against the enemy. Chechen and Dagestani mothers usually don’t. And that’s where Katherine Russell comes in, especially after a woman’s DNA was said to have been found on a fragment of the bomb.

Russell, the daughter of a Rhode Island doctor, met Tamerlan at a night club, converted to Islam, and, after marrying the elder Tsarnaev brother, reportedly became more observant and began to pull away from her family. She went to work while her husband stayed home. According to her friends, he was often abusive, calling her a “prostitute” and hurling furniture at her. This too is unfortunately common in the culture: Tamerlan’s naturalization was held up when he faced charges for slapping his girlfriend; his father, in an interview with The New York Times, wondered aloud at the strangeness of this country, where “you can’t touch a woman.”

But unlike a black widow, and unlike Zubeidat and Maret, when her husband was accused of blowing up the Boston Marathon and then died in a shoot-out with police, Russell, the American, did not pick up arms, verbal or physical, to avenge her man. She walked away. His violent attack on the state did not bond her to him; rather, it seemed to rip her out of his orbit, to shame and terrify her where, had Tamerlan been a radical in Dagestan, it may have brought her a certain grief-tinged honor. Instead, Russell issued statements in which she expressed her ignorance of the plot—the DNA was found not to be hers—as well as her shock and her family’s grief for the victims of the bombing. Most tellingly, she declined to claim Tamerlan’s body. Instead, it was claimed by his sisters, who though Americanized and horrified by Tamerlan’s act, said they would give their brother a proper Muslim burial.


Boston Marathon suspect’s sister allegedly threatened to bomb boyfriend’s ex
Associated Press | August 27, 2014

NEW YORK –  Boston Marathon bombing suspect Dzhokhar Tsarnaev’s sister was arrested Wednesday on suspicion she threatened to bomb a woman who previously had a romantic relationship with her boyfriend.

Ailina Tsarnaeva, who lives in North Bergen, N.J., made the threat against an upper Manhattan woman via telephone on Monday, police said. She turned herself in at a Manhattan police precinct, and police charged her with aggravated harassment.

Several media outlets reported that Ms. Tsarnaeva told the Harlem woman she had "people who can go over there and put a bomb on you."

Officers gave Ms. Tsarnaeva an appearance ticket and released her pending a Sept. 30 court date.

A telephone number linked to Ms. Tsarnaeva was disconnected. Her lawyer, George Gormley, said he had left his office and would speak Thursday.

Ms. Tsarnaeva has been required to check in with Massachusetts probation officers since prosecutors said she failed to cooperate with a 2010 counterfeiting investigation.

Prosecutors said Ms. Tsarnaeva picked up someone who passed a counterfeit bill at a restaurant at a Boston mall and "lied about certain salient facts during the investigation."

At a hearing last October, Mr. Gormley said Ms. Tsarnaeva was pregnant with her second child and was unlikely to flee.

Ms. Tsarnaeva once lived in Cambridge, Mass., at an apartment linked to her brothers, Dzhokhar and Tamerlan Tsarnaev, who were the subjects of an intense manhunt in the Boston area in the days after the deadly April 2013 marathon bombing.

Records show Ms. Tsarnaeva now lives with a sister, Bella Tsarnaeva.

Dzhokhar Tsarnaev is charged with building and planting the two pressure-cooker bombs that exploded near the marathon’s finish line, killing three people and injuring more than 260 others. He has pleaded not guilty.

Tamerlan Tsarnaev died after a gunbattle with police.


Defense Seeks to Move Trial on Boston Marathon Bombing
NYT | AUG. 8, 2014

BOSTON — Citing “an overwhelmingly massive and prejudicial storm of media coverage” here, lawyers for Dzhokhar Tsarnaev, accused in last year’s bombings at the Boston Marathon, pressed their case this week for moving his trial to Washington.

In papers filed here in federal court, Judy Clarke, the lead defense lawyer, wrote in response to prosecutors’ arguments: “Although the government insists that Mr. Tsarnaev has not been portrayed in a negative light, ‘but rather [as] the sympathetic young man who appeared on the cover of Rolling Stone,’ the actual data show he has been portrayed as a monster, a terrorist, depraved, callous and vile. He is viewed as an outsider, a foreigner, disloyal and ungrateful.”

The defense team had already sketched out its arguments for moving the trial, which is scheduled to begin in early November. In papers filed in June, the defense said its research had found an “overwhelming presumption of guilt” in Massachusetts against Mr. Tsarnaev in the bombings of April 15, 2013, which left three people dead and more than 260 wounded. Mr. Tsarnaev has pleaded not guilty to the 30 counts against him, 17 of which carry the death penalty.

In filings on Thursday evening, the defense sought to bolster those earlier arguments with almost 10,000 pages of supporting documents. They included extensive analyses of news media coverage and community attitudes performed by Edward J. Bronson, a professor emeritus at California State University, Chico.

Mr. Bronson was part of the team that argued unsuccessfully for the insider-trading trial of Jeffrey K. Skilling, the former chief executive of Enron, to be moved out of Houston, where the company was based. The court in that case ruled that pretrial publicity did not preclude a fair trial.

The Tsarnaev case is more frequently compared to that of Timothy McVeigh, who was convicted in the 1995 bombing of a federal building in Oklahoma City, in which 168 people were killed. The court held that prejudice against Mr. McVeigh in Oklahoma was so great that he could not obtain a fair trial there, and it moved the proceedings to Denver. In that case, the federal courthouse where the trial would have been held had been damaged in the bombing, and waiting for repairs would have delayed the start of the trial.

In papers filed here, Mr. Bronson said the Tsarnaev case “is more like the Oklahoma City bombing case, where a whole state was found by the trial court to be biased, than the city of Houston in the Skilling case.”

Ms. Clarke, a staunch opponent of the death penalty, added that the marathon bombing “has been portrayed, and is likely perceived, as a direct attack on Boston, its institutions, its traditions and each of its residents.”

Mr. Bronson said his analysis of coverage by The Boston Globe showed that it had run 2,420 articles on the bombing in a 15-month period, a volume that he called “extraordinarily high.” The Globe’s themes, words, phrases and passages constituted inflammatory overload, he said.

Brian McGrory, editor of The Globe, which won a Pulitzer Prize for its coverage of the marathon bombing, said in response, “We believe our coverage to be fair, accurate and comprehensive, and will let our work speak for itself.”

It is not clear when the judge in the case, George A. O’Toole Jr., will decide whether the trial should be moved. The government will probably ask for time to respond to the latest filings.

Jeremy Sternberg, a former federal prosecutor in Boston and now a partner in the Boston office of the law firm of Holland & Knight, said the defense filings indicated that there were jurisdictions outside Boston, like Washington, that might be less prejudiced. But, he said, “they have not demonstrated that you can’t find a fair and impartial jury” in eastern Massachusetts.

Tsarnaev friend convicted of obstructing Boston bombings probe

Lawrence Crook III, Jason Hanna and Susan Candiotti
CNN | July 22, 2014

Boston (CNN) — A federal jury on Monday found a friend of Boston Marathon bombing suspect Dzhokhar Tsarnaev guilty of obstructing the investigation into the 2013 attack.

The jury found Azamat Tazhayakov guilty of obstructing justice and conspiring to obstruct justice, in connection with the removal of a backpack with potential evidence from Tsarnaev’s dorm room after the bombings.

Among the images released during the trial was this one of a backpack, alleged to have been taken from Dzhokhar Tsarnaev’s dorm room and thrown in the garbage. The FBI says it later recovered it from a landfill. Azamat Tazhayakov is accused of helping ditch a laptop and the backpack believed to belong to schoolmate Dzhokhar Tsarnaev.

Jurors indicated in a verdict questionnaire that they didn’t believe a separate allegation — involving the removal of a laptop computer from the same dorm room — amounted to obstruction or conspiracy.

But Tazhayakov’s attorneys said they will appeal the verdict, maintaining that a different defendant was the one who removed the backpack and put it into a garbage bin, and that the jury was under pressure from a community upset by the bombings to find Tazhayakov guilty.

"He never took a backpack out of the dormitory. … We will certainly push that the evidence, and my client’s intent did not match up with the actions of the case," Tazhayakov attorney Mathew Myers told reporters Monday.

Sentencing for Tazhayakov, who could get up to 25 years in prison, is scheduled for October. The verdict came in the first trial related to the April 15, 2013, bombings that killed three people and injured more than 200 others.

Tazhayakov’s mother wept loudly in court when the verdict was read. Tazhayakov spoke briefly to his parents before he was escorted out of the courtroom.

Prosecutors accused Tazhayakov and his roommate, fellow Kazakh national Dias Kadyrbayev, of trying to protect Tsarnaev three days after the bombings by removing a backpack and a laptop from Tsarnaev’s dorm room at the University of Massachusetts Dartmouth, which Tazhayakov also attended.

Prosecutors alleged that Kadyrbayev and Tazhayakov took the laptop to their apartment, and that Kadyrbayev, with Tazhayakov’s knowledge, tossed the backpack in a trash bin. Authorities eventually found the backpack — containing Vaseline, a thumb drive and fireworks — in a landfill.

Kadyrbayev is awaiting trial on the same charges and has pleaded not guilty. Another friend, Robel Phillipos, pleaded not guilty to making false statements. None of Tsarnaev’s friends is accused in the bomb plot itself.

Prosecutors told jurors Tazhayakov knew the identity of the suspected bombers — Tsarnaev and his older brother Tamerlan Tsarnaev — before the public found out, allegedly texting Kadyrbayev, "i think they got his brother," hours before the public knew their names or their relationship to one another.

The friends recognized the Tsarnaev brothers after authorities released video and still photos asking for the public’s help finding the two men, prosecutors said.

Kadyrbayev told his friends that he believed Dzhokhar Tsarnaev "used the Vaseline ‘to make bombs,’ or words to that effect," an indictment against him reads.

The government said Tsarnaev texted Kadyrbayev after the bombings and told him he could go to his dorm room and take what he wanted. Kadyrbayev showed that text to Tazhayakov, the government alleged.

Authorities alleged that the friends picked up the backpack and the laptop from Tsarnaev’s dorm room on April 18, 2013, shortly before Tsarnaev was taken into custody.

The FBI interviewed the friends as part of the bombing investigation, and lawyers for Tazhayakov said he did everything he could to help the probe when he spoke with investigators. Based on that information, authorities found Tsarnaev’s backpack in the landfill, his attorneys said.

Daniel Antonino, one of the jurors in Tazhayakov’s case, said the panel found him guilty of obstruction because "the backpack was simply taken and discarded like they were getting rid of evidence."

"They just threw it in the trash, so that’s obstructing justice. Just taking it from the dorm room, we felt, was obstructing justice," Antonino said.

Antonino said the jury didn’t feel the same way about the laptop, because "they didn’t destroy it," and because jurors felt the friends saw the laptop as something they should take for its potential monetary value. Antonino cited Tsarnaev’s alleged text to Kadyrbayev, inviting him to take what he wanted.

Myers, Tazhayakov’s attorney, said his client was being unfairly punished for what Kadyrbayev is alleged to have done. The only thing Tazhayakov took from Tsarnaev’s room, Myers said, was a pair of headphones that rightfully belonged to him.

"I understand we’ve spoken about pronouns in this case: ‘They did this, they did that.’ (But) my client did not leave that dorm room with a backpack," Myers said. "He can only control what people do to a certain extent. … ‘They’ did not do anything.

"Dias Kadyrbayev went and took that backpack to a Dumpster. My client wasn’t part of that. How a jury claims that my client had intent to do that with Dias, I guess, is a misconstruing of the plain evidence."

Myers said his team also would object to the court’s verdict questionnaire, which asked, for both charges, whether Tazhayakov should be found guilty based on the backpack, the laptop or both. Myers said the jurors might have thought that saying no to the laptop was significant — perhaps thinking they were giving Tazhayakov a break — when in fact it did no such thing.

"We knew that could be misleading to the jury," Myers said.

Dzhokhar Tsarnaev awaits trial, having pleaded not guilty to 30 federal charges tied to the bombing and the subsequent pursuit of him and his brother, Tamerlan.

Tamerlan Tsarnaev died in a shootout with police days after the bombing.


Marathon suspect’s lawyers want hearing on leaks
Dzhokhar Tsarnaev facing charges in fatal bombing
WCVB | Jul 25, 2014


BOSTON — Lawyers for Boston Marathon bombing suspect Dzhokhar Tsarnaev have renewed their request for a judge to hold a hearing on leaks to the news media.

Last month, a judge issued a stern warning to prosecutors about former or current members of their team speaking to the media after Tsarnaev’s lawyers objected to interviews retired FBI agents gave around the anniversary of the deadly 2013 bombings.

The request came after several news outlets this week reported that investigators believe a friend of Tsarnaev provided the gun authorities say was used by Tsarnaev and his brother in the fatal shooting of an MIT police officer several days after the bombings.

On Friday, Tsarnaev’s lawyers asked the judge to hold a hearing to determine what instructions were given to law enforcement about not talking to the media.

Moscow’s Ambassador to London Stresses Russia’s Interest in Litvinenko’s Case Probe

LONDON, July 24, 2014 (RIA Novosti) – Moscow is among the key stakeholders in finding out the truth about the 2006 death of Alexander Litvinenko, a former Russian Federal Security Service officer, Russian Ambassador to London Alexander Yakovenko said Thursday.

"Russia is among the most interested parties in establishing the truth in this dark business. Simply because serious allegations against the Russian Federation have been made publicly. We have always asked the British authorities to provide evidence, which, as they claim, they have, accusing Russian citizens [of the involvement in Litvinenko’s death]. But these requests were rejected," the ambassador said at a press conference in London.

"The British government has refused to provide this evidence upon request of the Coroner conducting the inquiry. Now, as we understand, the evidence will be examined in private hearings, closed to the public, presumably for reasons of national security. We will never accept any decision based on evidence which had not been considered in a competitive open trial," the Ambassador said.

On Tuesday, UK Home Secretary Theresa May agreed to a public inquiry into the Litvinenko case after a number of refusals to do so, having previously argued that the existing inquest into Litvinenko’s death was sufficient. The first hearing of the proceedings will be held on July 31. At the same time, the allegation that the British authorities failed to prevent Litvinenko’s death was dropped from the investigation.

The issue could not be examined during the earlier inquest into Litvinenko’s death, as the inquest did not allow certain sensitive material on the case to be considered.

Litvinenko’s widow Marina Litvinenko won a High Court ruling that May should reconsider her decision not to allow a public inquiry. Coroner Sir Robert Owen, who was conducting the inquest into Litvinenko’s death, proposed a public inquiry as a more appropriate measure instead of an inquest, since it would allow the consideration of sensitive material in private.

With the public inquiry approved by the UK government, this material, potentially relating to Russia’s alleged role in Litvinenko’s death, can now be used in the investigation.

Litvinenko died on November 23, 2006 of poisoning by radioactive polonium-210 in London. His health began to deteriorate after he met up with former colleagues Andrei Lugovoi and Dmitry Kovtun for a cup of tea in London’s Millennium hotel.



Lugovoi rules out participating in public probe of Litvinenko’s death in London, calls it "cynicism"

MOSCOW. July 22, 2014 (Interfax) – The decision of the British authorities to resume investigating the circumstances of Alexander Litvinenko’s death is cynical and politically motivated, Russian State Duma Deputy Andrei Lugovoi told Interfax on Tuesday.

"Cynicism, deception, and treachery. This is the only way I can comment on the actions of the British establishment and the decision to hold a public investigation of Litvinenko’s death," Lugovoi said. This piece of news is perplexing, he said.

The circumstances of the death of Alexander Litvinenko, who died in London in November 2006 of polonium poisoning, will be investigated publicly, British Home Secretary Theresa May said earlier.

"This year it will be eight years since Litvinenko’s death. And every time the British pull the Litvinenko case out, right when ‘political viability’ becomes an option. Now, due to the situation existing in southeastern Ukraine, the West enhanced pressure on Russia and personally on President Vladimir Putin," Lugovoi said.

Lugovoi said he was not considering participating in the public investigation in London in any way.

Alexander Litvinenko, a former agent of the Russian Federal Security Service (FSB) who fled to the UK in 2000, died in November 2006. The radioactive element polonium-210 was later found in his body.

Duma Deputy Andrei Lugovoi is considered by the UK to be the main suspect in the case. Lugovoi insists he is innocent. In April 2012, Lugovoi took a polygraph test conducted by British experts, which showed he was innocent.

The Impact of Terrorism Fears

Antonius, Daniel, PhD; Sinclair, Samuel Justin, PhD
Security 50.11 (Nov 2013): 128, 130-132

Terrorism has emerged in the last decade as one of the most critical issues with which governments must contend, topping most Western nations’ agendas in terms of resource allocation. For example, some reports indicate the United States has spent more than one trillion dollars waging the "War on Terror" – money and resources that may have been allocated very differently in the absence of such threats. Earlier this year, the United States was once again reminded of how vulnerable the public is to terrorism when radicalized brothers Tamerlan and Dzhokhar Tsarnaev, after months of planning, allegedly carried out a bombing attack during the Boston Marathon, killing three people and injuring hundreds. More recently, in August of this year, the Obama administration responded to intercepted al Qaeda messages of credible terrorist threats in the Middle East and North Africa by closing 22 U.S. diplomatic facilities in Muslim nations and issuing a global travel alert. These and other events very often receive extensive media coverage, which subsequently influences people’s worldviews and the manner in which they adapt to their immediate environment.

Behavioral scientists have been studying the psychological effects of terrorism intensely since the attacks of September 11, 2001, and good data is emerging as to the immediate and longer-term effects of terrorism. However, in many of these cases, psychological impact is measured following a discrete attack or event, which is then thought to elicit a specific type of reaction (e.g., anxiety or fear). While this body of research provides a framework for beginning to understand the impact of discrete terrorist attacks, questions remain as to the psychological impact of ongoing threat. Following on this, and given how terrorism has altered the political and social landscape in the United States and globally, some have also begun to question how psychological reactions (such as fear and anger) impact political processes, and whether such reactions can be manipulated for political purposes.

The fact is that fear, or the anticipation of future terrorism, is a primary psychological weapon underlying acts of terrorism. This anticipatory fear, or worry, can itself have serious effects on a variety of domains, including political beliefs and support for certain governmental policies, making decisions about where to live and work, whether to travel into certain environments for any reason and how people generally engage in activities of daily living. However, there is also considerable heterogeneity in people’s emotional responses to terrorism and terrorism threats. Understanding the impact of these emotional processes on individuals and societies can be crucially important in evaluating terrorism threats and determining how warnings should be constructed and disseminated.

We recently explored the psychological effects of terrorism and ongoing threat, as well as the impact these dynamics have on political processes, in two recently published volumes. The Psychology of Terrorism Fears (Oxford University Press, 2012) seeks to complement existing research focusing on the psychological effects of terrorist attacks by also examining how people are affected by ongoing threat, and works to present a more integrated model for understanding how terrorism affects people. In the follow-up volume, The Political Psychology of Terrorism Fears (Oxford University Press, 2013), we, in collaboration with an international cast of scholars, focused on how terrorism fears and threat alerts can influence political engagement and trust in government policy making, as well as the government’s ability to generate public support for its policies.

How Do Threats of Terrorism Influence Individuals?

Much has been written about the psychological impact of terrorist attacks. This work has focused on the immediate increase in psychiatric symptoms and disorders and the relatively quick normalization of psychopathology in the months and years following an attack. However, these trends, in many ways, may mask the underlying sense of fear and worry that many people have about terrorism threats, or future terrorist attacks. In fact, people may continue to fear terrorism in meaningful ways long after a terrorist attack or threat has passed. This lingering fear varies across time and context, affecting people in both negative and positive ways. Although these symptoms would not necessarily reach the level of a psychiatric disorder and require treatment, they may still significantly influence daily activities such as decisions about employment, whom to socialize with, use of public transportation such as buses and trains, congregating in public and crowded places and traveling on an airplane.

That, of course, does not mean that terrorism threats have the same effect on everyone. Most people arguably respond to threats of future terrorism in a rational and constructive manner. Very compelling research has also shown that whether an individual’s response is primarily fear versus anger (and it should be noted that emotional responses may fluctuate within individuals) may have a significant impact on their behavior. In the context of anger, people tend to exhibit greater levels of optimism and a preference for confrontation, whereas with fear comes greater pessimism and a preference for using conciliatory measures to de-escalate conflict. Moreover, research has highlighted the paradox of how terrorism fears can negatively affect some people and societies, while at the same time serving as the central force in strengthening resilience and fostering post-traumatic growth.

How Do Threats of Terrorism Influence Political Processes and Entire Societies?

The idea that exposure to terrorism threat (and the psychological reactions that ensue) affects political engagement, trust in government policy making and the government’s ability to generate public support for its policies is not new. The evidence indicates that people place larger degrees of trust in their government’s ability to keep them safe from future violence following large-scale terrorist attacks. For example, research has noted that the public’s trust in the U.S. government increased markedly following the 9/11 attacks, but that over time these trends have declined.

Some scholars have argued that when high levels of emotionality are activated through threat alerts, it impacts how people engage their respective political systems, from the politicians they elect to office to the policies they support. However, as we mentioned, not all emotions are created equal. For example, various studies have demonstrated the differential effects of fear and anger on people’s trust in their government and support for different security policies. Fear, in comparison to anger, has been associated with a greater degree of perceived risk, as well as preferences for more precautionary, conciliatory measures to reduce external threat. The feeling of anger may actually lead people to experience a greater locus of control over their environment, whereas fear is associated with less perceived control.

The emotional dynamics are complex and influenced by other factors such as political ideology. For example, research suggests that individuals identifying as Democrats may, when primed with high levels of emotion in the context of terrorism, be more skeptical towards policies aimed at improving a sense of security when Republicans are perceived as the ones making these policies. Cultural differences in support for governmental policies also emerge as a function of emotional reactions. Societies where there are high levels of baseline trust in the government may respond differently to terror threats than societies where there is a lower level of trust. The case of the Norwegian terror event of July 22, 2011 (when Anders Behring Breivik, fueled by his far-right militant ideology and Islamophobia, killed 77 people in a premeditated terror attack) is particularly interesting in this context. Researchers from Norway found that increased support for the government in the aftermath of this terror attack did not arise as a result of public fear; instead, high levels of existing institutional trust may have buffered against the negative effects of fear and the elevated terror threat level. In so-called "high-trust" societies, this trust, instead of fear, may then have very different effects than what is common in "low-trust" societies, creating higher levels of national togetherness and strengthened interpersonal and institutional trust. Ongoing threats and fear also have the potential to shape culture over time, such as in the protracted conflict situations in Northern Ireland and Israel. In these societies, fear is arguably a powerful motivating force for pursuing a perceived sense of safety and security.

A New Strategy

Using fear, and the emotions of the public, as a means of achieving some political goal is not a new strategy. During the Cold War era, for example, Sen. Joseph McCarthy used the threat of communism to inject fear into the political process, manipulating public and political views of who could be trusted and who were secret communist spies. More recently, President George W. Bush’s famous declaration of the War on Terrorism helped remind the public of the threat of terrorism, which was central to the success of his re-election campaign. Now, more than a decade past the 9/11 attacks, our society has changed in dramatic ways. These changes may reflect a sense of insecurity related to the potential threats we face, and have contributed to the development of a massive and integrated security infrastructure to keep us "safe" from these ambiguous threats. Some have even argued that we have become a "securitized" culture, with other national priorities taking a backseat to complex and ever-expanding national security priorities.

What is important, however, is to understand the complex array of factors that have contributed to this evolution. Increasing our awareness and understanding of the significant impact emotional processes have on individuals and societies in decision making can be crucially important in determining how terrorism threats and warnings are constructed and disseminated. Fostering a sense of control and agency within these messages, by presenting basic steps for preparedness and focus, may help decrease unwarranted fear.

For more information, please see Sinclair, S. J. & Antonius, D. (Eds.) (2013). The Political Psychology of Terrorism Fears. New York: Oxford University Press, and Sinclair, S. J. & Antonius, D. (2012). The Psychology of Terrorism Fears. New York: Oxford University Press.



Author Affiliation

Daniel Antonius, Ph.D., is Assistant Professor and Director of Forensic Research in the Department of Psychiatry at the University at Buffalo School of Medicine and Biomedical Sciences.

Samuel Justin Sinclair, Ph.D., is an Assistant Professor of Psychology at Harvard Medical School, and the Director of Research at the Psychological Evaluation and Research Laboratory (The PEaRL) at Massachusetts General Hospital.

Boston Marathon a case study in lessons learned following last year’s bombing tragedy

Lasky, Steve | Apr 21, 2014

Tighter security and attention to intelligence gathering strengthen preparedness for storied event


Things were different at the Boston Marathon this year. Meb Keflezighi became the first American man to win the Boston Marathon since 1983 and the second-oldest runner ever to take the crown. And unlike past races, when the event was a virtually open venue for spectators and participants alike, strict physical security measures and a robust police presence made for long security lines, barricaded race routes, random searches, bans on backpacks and zero tolerance for the rogue runners who used to be part of the Marathon’s charm – remember Rosie Ruiz? The Marathon also accommodated more than 9,000 additional runners who failed to cross the finish line in 2013 because of the horrific terrorist bombing at last April’s race.

This year’s race also figured to be a lot different for Bonnie Michelman, the Director of Police, Security and Outside Services at Mass General Hospital. The devastating attack put Michelman and her entire facility on the frontline in 2013, as Mass General was the designated primary hospital for the race. Her facility wound up treating close to 300 casualties as a result of the bomb attacks.

"The preparations for last year’s event were prudent and appropriate for both the city and my facility. No one could have ever anticipated the unforeseeable nature and horror of this event. You can never plan for every contingency, for every event, and this was by far a startling example of that," said Michelman, who pointed out that the situation was made even more difficult due to the longitudinal nature of the event.

"This was an extremely disruptive disaster for many organizations, including mine. It wasn’t a four or five hour disaster – it was a multi-day disaster. We went into Tuesday still gathering evidence, looking for the suspects, trying to reunite families, trying to identify comatose patients; and then on Thursday we had to ramp up for a Presidential visit," she continued. "So we had a huge emergency preparedness response to those dignitary visits. Then Friday, we had an unprecedented city lockdown that created all sorts of issues for the entire Commonwealth of Massachusetts. I have 8,500 employees here at Mass General who take public transportation to work, which was completely shut down."

Michelman has been a long-time key player in the region’s disaster preparedness efforts. The city of Boston regularly conducts disaster and emergency preparedness exercises throughout the year, with a major training event each May. There are also numerous table-top exercises conducted among the public-private partners, MEMA and FEMA.

"The endless drills and preparedness training took what was an extremely bad event to a level that was manageable in many aspects. The fact that we had 281 people who were severely injured and they all survived, showcased the fact that this city was extremely well prepared," Michelman added. "The response and result was a tribute to all involved – from police and fire to our EMS and medical teams that were at the race, plus the Boston Athletic Association that coordinated the race, down to our hospitals. Everyone was unbelievable in the level of response."

Michelman’s comments certainly seem to reflect the report released last week by the Department of Homeland Security titled "Boston One Year Later: DHS’s Lessons Learned," detailing three topics which were a focus of attention in the aftermath of the Boston Marathon bombing. The report discussed the "importance of partnerships," the "need for effective and reliable communications," and the need to further boost anti-radicalization efforts.

Massachusetts has been the recipient of more than $1 billion from 22 DHS grant programs since 2002, including $370 million for the Boston urban area. DHS grants issued to local law enforcement helped prepare for a quick response to the bombing and identification of the suspects. According to the report, "DHS grants, training and workshops as well as drills and exercises throughout the Northeast region, and specifically in Boston and the Commonwealth of Massachusetts, built preparedness capabilities to enhance responses to complex, catastrophic attacks. Participants credited these investments for the coordinated and effective response to the bombings by law enforcement, medical, and other public safety personnel."

Learning from past mistakes and creating workable solutions are a couple of the key elements Chuck Brooks thinks set Boston and the surrounding area apart when it comes to assessing its emergency management needs and implementing strategic plans that work. Brooks, Vice President, Client Executive for DHS at Xerox, said the most significant development has been the federal, state, and local first responder communities recognizing past shortfalls in national emergencies and closely examining successes and failures from Boston, especially in the areas of planning, coordination and interoperable communications.

"One outcome of reviewing the incident was the discovery that the pre-positioning of medical first responders for the marathon greatly helped in the triage efforts for victims on the scene. In the past, as a matter of EMS (emergency medical services) protocols, medical first responders waited for law enforcement to clear their arrival before they responded. The pre-staged medical services on the scene may become more standardized for security planning at future public events," said Brooks.

He added that another big development is that federal, state and local communities have become even more engaged in learning how to improve "relationship preparedness" so they can respond better and be more resilient in a future emergency. Brooks also cited the just-released report commissioned by then-DHS Secretary Janet Napolitano, noting that funds were used to "equip and train tactical and specialized response teams specifically in IED (Improvised Explosive Device) detection, prevention, response, and recovery, including SWAT teams and Explosive Ordnance Disposal canine detection teams among other law enforcement units."

Knowing how to scramble through the federal funding maze and asking the right questions is a crucial aspect of properly ramping up emergency preparedness planning. Brooks stressed that DHS, and particularly FEMA, have been active in promoting the availability for training.

"From the defense draw-down overseas, a great deal of equipment is being made available to state and local public safety professionals. In most states the governor operates a homeland security committee to evaluate and prioritize needs in various state municipalities. There is a lot of paperwork involved in grant applications, but DHS officials are accessible and willing to help," Brooks pointed out. "My recommendation for state and local officials is to also look to private firms that specialize in securing grants under the Urban Areas Security Initiative (UASI) and DHS’s National Protection and Programs Directorate’s Federal Protective Service (FPS). Each program has its own requirements, processes and timing."

While most experts praised the preparation and the actions of Boston’s first responders and healthcare facilities in the aftermath of last year’s Marathon bombings, the most glaring weakness proved to be the lack of shared intelligence. Reports from ABC News immediately after the bombing said U.S. Customs and Border Protection’s (CBP) National Targeting Center "re-vetted" all flights that departed earlier in the day from Boston, New York, and Newark airports to identify potential suspects.

When a review of DHS’s "name-matching capabilities" was completed, it uncovered a misspelling of "Tamerlan Tsarnaev," the older of the two accused Boston bombers. This mistake apparently allowed him to return unnoticed to the United States after a trip to Russia, despite previous alerts from Russian intelligence. DHS has since improved its ability to detect variations of names derived from a wide range of languages.

It was also reported that Boston Police Commissioner Ed Davis said he was not notified about Tsarnaev before the attacks despite previous FBI investigations of him, but DHS has since improved its system of sharing information with local officials about potential threats.

"Intelligence sharing has been also highlighted as an area of focus for improvement. There was a revelation that law enforcement had been warned about the threat of religious extremist Tamerlan Tsarnaev and should have been alerted. The problem is that it is difficult and involves many resources to track and continually monitor every potential threat, especially that of the Lone Wolf," said Brooks. "We are a nation of soft targets and openness. New technologies such as data analytics, license plate reading, and facial recognition cameras can be employed for intelligence and forensic purposes but there is always an issue to consider regarding the balance of security with freedom and privacy."

Perhaps no one is more seasoned at understanding the challenges of large venue special events than William Rathburn, who served as the Los Angeles Police Department’s Planning Coordinator for the 1984 Olympic Games when he was LAPD’s Deputy Chief, then as the Director of Security for the Atlanta Olympic Games in 1996, which at the time was the largest Olympic security undertaking in the Games’ history, with a budget of $100 million and a staff of 17,224 security personnel. Rathburn also served as Chief of Police for the City of Dallas, which operated the seventh largest police department in the nation.

Rathburn admitted that protecting Olympic venues may have been a bit easier than open events like a Marathon for the simple reason that defined security perimeters could be established and protected. Putting in a secured screening process and vetting the credentials of everyone associated with an Olympics provided safeguards his colleagues in Boston did not enjoy.

That being said, Rathburn firmly believed that a breakdown in the intelligence gathering process contributed greatly to the Boston tragedy.

"Intelligence is the one thing that is important in any event. Intelligence is the key element in your pre-planning and during the event. It takes on even more importance in an open venue event like the Marathon. It is impossible to provide security for a 26-mile course. If you harden portions of it – the most vulnerable areas — you can either discourage them or move them further out. That magnifies the importance of solid intelligence," said Rathburn.

Rathburn added that protocols have changed over the years with a greater focus on inter-agency communication than ever before. "I grew up in a professional environment where you had an inter-agency coordination center during a major event and that was a first responder’s main point of contact between agencies. We didn’t really see a need for direct communication from officer to officer unless it was a task force operation or something similar.

"I think, to some extent, when you try to provide everyone immediate communication, it can lead to a slowdown in the communication process because so many people are trying to communicate. Unfortunately, that may have happened during the Boston bombing incident. Having immediate communication is a great thing until you overload the system or fail to have a designate point of contact," Rathburn surmised. "In my opinion it was not the fact the backpacks were allowed into the Marathon venue that caused the bombing. It was a failure to assess credible information that potential threats were imminent."

Despite all the planning and cooperative partnerships among agencies in the Boston area, even Michelman admitted the process could have been refined when it came to intelligence and communications in the previous year. She said everyone learned a painful lesson.

"From the perspective of public-private partnerships and synergies, we in Boston have been in a very different place compared to other cities around the country. We have worked very hard in making relationships between public, private and government agencies — and the intelligence gathering process — better. We learned a lot from the Democratic National Convention several years ago, when we set up a Multi-Agency Command Center (the MAC) that had representatives from every public agency, and also from large private organizations like mine," said Michelman.

"There has been a lot of talk resulting from last year’s horrific event surrounding command and control and unity of command. There is no secret that law enforcement said there was no one person in charge. And maybe that’s okay in some events because there just couldn’t be, but that didn’t lessen the scrutiny around that issue. We have all worked diligently to rectify any shortcomings in that area," she concluded.

Secured Cities Note:

Both Chuck Brooks and Bonnie Michelman will be featured speakers at the 2014 Secured Cities Conference in Baltimore, November 4-6. For more information on the program and how to register, please go to

Big Data Surveillance: Introduction

Andrejevic, Mark; Gates, Kelly
Surveillance & Society 12.2 (2014): 185-196

One of the most highly publicized avatars of high-tech surveillance in the networked era is the drone, with its ever-expanding range and field of vision (and sensing more generally), combined with its ominous military capabilities. One of the less publicized facts about the deployment of surveillance and military drones is that in addition to weapons, cameras, and other sensors, they are equipped with a device called an "Air Handler" that can capture all available wireless data traffic in the area. As one of the rare news accounts about this device put it, when a drone goes out on a mission, "the NSA [National Security Agency] has put a device on it that is not actually under the control of the CIA or the military; it is just sucking up data for the NSA" (Goodman 2014). The drone then comes to represent a double-image of surveillance: both the familiar "legacy" version of targeted, purposeful spying and the emerging model of ubiquitous, opportunistic data capture. As one of the reporters interviewed about his research on the "Air Handler" put it, "the NSA just wants all the data. They want to suck it up on an industrial scale. So they’re just piggybacking on these targeted operations in an effort to just suck up data throughout the world" (Goodman 2014). This description of the NSA’s approach to data collection parallels the widely publicized comments of the CIA’s Chief Technology Officer about contemporary strategies of surveillance: "The value of any piece of information is only known when you can connect it with something else that arrives at a future point in time…Since you can’t connect dots you don’t have, it drives us into a mode of, we fundamentally try to collect everything and hang on to it forever" (Sledge 2013). For drones, the signal-saturated sky is a sea of electromagnetically encoded data that can be captured, processed, refined, and perhaps put to use.

The collect-everything approach to monitoring and intelligence-embodied in the Air Handler, PRISM (the NSA’s secret mass electronic surveillance and data mining initiative), and a litany of other programs both public and private-is our starting point for exploring the connection between surveillance and so-called "big data." If conventional definitions of surveillance emphasize its systematic and targeted character (the notion that there is a specific "object" of surveillance), both aspects undergo some significant modifications when the goal is, generally speaking, to capture as much data as possible about everything, all the time, and hold on to it forever. Moreover, the ambitious scope of such surveillance raises a host of important issues associated with the infrastructure for collecting and storing huge amounts of data as well as the techniques and technologies for putting it to use. Even if the underlying goal of capturing information for the pursuit of some form of advantage, leverage, or control remains constant (see, for example, the contributions of both Reigeluth and van Otterlo to this issue), conventional understandings of the operation of surveillance and its social consequences are being reconfigured by the "big data" paradigm.

Some definitions will help specify the character of the changes associated with big data-driven forms of surveillance. For our purposes, the notion of "big data" refers to both the unprecedented size of contemporary databases and the emerging techniques for making sense of them. This understanding of big data will have consequences for our reconfigured definition of data-mining enhanced surveillance. What is significant about the big data moment is not simply that it has become possible to store quantities of data that are impossible for any individual to comprehend (The Library of Alexandria did that, as does the night sky, and the human brain), but the fact that this data can be put to use in novel ways-for assessing disease distributions, tracking business trends, mapping crime patterns, analysing web traffic, and predicting everything from the weather to the behavior of the financial markets, to name but a few examples. (For more on the logic of prediction and pre-emption, see Thomas’s contribution to this issue.) Humans are fated to live in environments that contain more information than they can ever fully register and comprehend. The advent of big data marks the moment when new forms of sense-making can be applied to the accumulated data troves (and, correspondingly, the moment when these troves can be amassed, stored, and shared in forms that are amenable to such techniques). So we take the term big data to refer to a combination of size, storage medium, and analytic capability. To refer to big data is not simply to claim that databases contain more information than ever before (although they do), but also to consider the new uses to which that data is put-the novel forms of "actionable intelligence" that emerge from the analysis of ever-expanding data sets. The Library of Congress, for example, has been around for a while, but as its contents are digitized, algorithms can search for patterns and correlations that have been hitherto impossible to detect. 
If, in the past, there were practical limitations on the ability to track the simultaneous movements of tens or hundreds of thousands of people through a major city, for example (it would be prohibitively expensive to hire enough people to tail everyone and take notes), today the ability to discern useful but non-obvious patterns from the data depends on complex technical systems. Humans simply cannot do that kind of data analysis unassisted.

The formation of big data systems, understood in these terms, has direct consequences for associated forms of surveillance, which avail themselves of both the burgeoning databases and the techniques for making sense of them. Perhaps the most obvious of these is that big data surveillance necessarily relies on automated data analytics. The emerging, massively data-intensive paradigm of knowledge production relies more than ever on highly complex automated systems that operate beyond the reach of human analytical capacities. The CIA can only aspire to "collect everything" if it has at least the hope of putting to use the world redoubled in digital form-something the agency could not hope to do with its small army of human spies alone, or even with the computing capacity it possessed merely a decade or so ago. The reliance on automated data analytics-or data mining-has its own consequences derived from the fact that the goal of such processes is to unearth indiscernible and un-anticipatable patterns from the data (see, for example, Chakrabarti 2009). That is, the data analytic process and its results are systemically, structurally opaque. The legal theorist Tal Zarsky (2013) describes the decisions based on such automated data-mining processes as "non-interpretable" (and thus non-transparent) because of their inherent complexity: "A non-interpretable process might follow from a data-mining analysis which is not explainable in human language. Here, the software makes its selection decisions based upon multiple variables (even thousands)" (2013: 1519). In this regard, the advent of big data surveillance augurs an era in which determinations of risk and suspicion result from complex data interactions that are both un-anticipatable and inexplicable. The database can generate patterns that have predictive power but not necessarily explanatory power.
According to this logic, we need not and cannot know how the correlations were derived, or what causal explanations might explain them; we must simply accept that the data science knows best.

Relatively early examples of data mining retained some connection to intuition and explanation. In the 1970s, for example, political operatives discovered that people who drove Mercurys (a model of car) were more likely to vote Republican. There might not be a clear explanation for this, even if it is, perhaps, an intuitable finding, in the sense that the car in both appearance and reputation carried with it a certain set of associations. We might not be particularly surprised, for example, to learn that, more recently, the music streaming service Pandora discovered that Bob Marley fans are more likely to vote Democratic than Republican (Dwoskin 2014). However, the real prize, from a data-mining perspective, is the generation of completely un-intuitable correlations that nonetheless have predictive (or sorting) power. Let’s imagine, for example, that data analysis indicated viewers of a certain age who bike to work and wear glasses are more likely to respond to a toothpaste ad that emphasizes the product’s cavity-fighting power than its brightness-inducing qualities. The lure here would be the attempt to find some kind of underlying connection between the stated attributes and the prediction. However, the real goal of data mining is to move beyond this lure in order to arrive at correlations generated by thousands of variables over domains of millions of data points in ways that are untranslatable into any intuitively available pattern.
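
One way to see why such mined correlations resist explanation is scale: rank enough candidate variables against an outcome and the strongest correlate often has no causal story at all. The sketch below uses purely synthetic random data (no real dataset) to make the point.

```python
# Synthetic illustration: with 1,000 random binary "features" for 200 people,
# the best correlate of a random outcome is pure noise, yet it would rank
# first in a naive data-mining pass.
import random

random.seed(0)
N_PEOPLE, N_FEATURES = 200, 1000

# A random 0/1 outcome (e.g. "responded to the cavity-fighting ad") and
# random 0/1 features (age band, commute mode, eyewear, ...).
outcome = [random.randint(0, 1) for _ in range(N_PEOPLE)]
features = [[random.randint(0, 1) for _ in range(N_PEOPLE)] for _ in range(N_FEATURES)]

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Rank features by absolute correlation with the outcome, as a miner would.
ranked = sorted(range(N_FEATURES),
                key=lambda i: abs(correlation(features[i], outcome)),
                reverse=True)
best = ranked[0]
print(f"top 'predictor' is feature #{best}, r = {correlation(features[best], outcome):.2f}")
```

At real-world scale the mined variables are not pure noise, but the same mechanics apply: the ranking surfaces whatever correlates, with or without an intelligible reason.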

Zizek, following Lacan, describes this kind of functional non-knowledge as the domain of the symbolic Real: "There is a symbolic Real, which is simply meaningless scientific formulae…you cannot understand quantum physics [for example], you cannot translate it into our horizon of meaning; it consists of formulae that simply function" (Zizek and Daly 2004: 97). We might say something similar of algorithmic correlations and predictions: they do not provide us with underlying, common sense explanations, but offer findings based on a level of complexity that makes them, in some cases, utterly inexplicable. The algorithms do not integrate the findings into our "horizon of meaning" (we may never really understand why a particular set of variables is more or less likely to yield a desired outcome); they simply function.

The very opacity of the data-mining process suggests that the potential uses of any data set cannot be defined in advance: it may become useful in conjunction with yet-to-be collected data, and it may illuminate activities or outcomes to which it seems entirely unrelated. That is, the specific justification for collecting the data may come only after the fact, thus demanding that all data be collected and stored for its future use-value-its correlational and predictive potential-even if there are no envisioned uses for it at present. Big data surveillance, in this regard, is structurally speculative: data that is seemingly entirely unrelated to a particular strategic objective may well yield the most useful unforeseen correlations.

There is a second sense in which data collection in the context of big data surveillance is speculative: that of attempting to amass an archive that can be searched and sorted retrospectively. The goal is to collect data about everyone, because one never knows who might end up doing something that needs to be excavated and reconstructed (for example, breaking a law). If the archive is complete, according to this logic, then no matter who the suspect is, a relevant data trail can be reconstructed. The data load generated by mobile phones is a case in point. Police have already used mobile phone data to catch thieves by placing them at the scene of the crime and then confirming that their movements coincided with a subsequent car chase (Perez and Gorman 2013). One of the fantasies related to the post-9/11 goal of "total information awareness" involves the generation of a complete archive that would supplement (or displace) the vagaries of reported actions and memories by externalizing them in the form of machine-readable databases. Hence the recent spate of proposed and actual data retention laws in Europe, Australia, and elsewhere: typically these laws require that telephony and internet providers store data for a minimum period of time (from six months to a year or more) so that they can be accessed by authorities if necessary. Such laws rely on the oligopolistic or monopolistic character of service provision, offloading the surveillance responsibility onto large private sector operators.
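
The retrospective logic described above can be made concrete with a toy query over a retained-records archive. Everything here (device IDs, towers, timestamps) is invented for illustration; real carrier records are far richer.

```python
# Toy model of a data-retention archive: (device, cell tower, timestamp)
# rows stored for every subscriber, queried only after a suspect emerges.
from datetime import datetime

archive = [
    ("device-17", "tower-A", datetime(2014, 3, 1, 21, 55)),
    ("device-42", "tower-A", datetime(2014, 3, 1, 22, 1)),
    ("device-42", "tower-B", datetime(2014, 3, 1, 22, 20)),
    ("device-99", "tower-C", datetime(2014, 3, 1, 22, 5)),
]

def devices_near(tower, start, end):
    """Devices the archive places at `tower` within the time window."""
    return sorted({d for d, t, ts in archive if t == tower and start <= ts <= end})

def trail(device_id):
    """Reconstruct one device's movements in time order."""
    return sorted((ts, t) for d, t, ts in archive if d == device_id)

# Step 1: who was near the scene (tower-A) around the time of the incident?
print(devices_near("tower-A", datetime(2014, 3, 1, 21, 50), datetime(2014, 3, 1, 22, 10)))
# Step 2: does a candidate's subsequent trail match the getaway route?
print(trail("device-42"))
```

The point of the sketch is the order of operations: data about everyone is collected first, and the question is formulated afterwards.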

Data retention initiatives are often criticized for treating everyone as a potential suspect, but this is not quite right: the assumption is that the vast majority of those tracked will not be considered suspects, and even further (as we are likely to be reminded) that data retention can help exonerate the innocent. Thus, the corollary to the repeated (and questionable) refrain that we need not worry about new forms of data collection as long as we are not doing anything wrong is that the database can come to serve as our alibi. If we are falsely accused or merely the target of suspicion, we have recourse to the monitoring archive that holds records of our whereabouts, our activities, our interactions, and our communications. Alternatively, for those who run afoul of the law, the archive can be used to link them to the scene of a crime, to reconstruct their movements, to identify and eventually capture them, as was ostensibly demonstrated by the pursuit of the Boston Marathon bombing suspects, originally identified by searching through the archive of security camera video of the event.

However, the Boston Marathon bombing case in fact underscores some holes in the logics of big data that introduce important complications in how this conjuncture of technologies and practices is conceptualized and studied as a problem in Surveillance Studies. There is a case to be made that identifying and capturing the bombing suspects had more to do with human grunt work than automated, high-tech forms of surveillance, big-data or otherwise (see, for example, Sheth and Prasch 2013). Ultimately, the correct suspects were identified by a team of human investigators manually combing through surveillance video gathered on the ground from local businesses. The young men were then apprehended after they shot a campus police officer at MIT, carjacked someone who managed to escape, and then had a shootout with the police. (The surviving suspect, Dzhokhar Tsarnaev, was captured when a Boston resident went outside to smoke a cigarette and saw blood on the side of his boat.) More to the point, if predictive analytics were indeed so sophisticated, why was the Tsarnaev brothers’ plot not detected in advance? Given what we now know about the NSA’s massive data-gathering machine, how did these two prolific users of email, web browsers, cell phones and social media slip under the predictive radar?

The failure of predictive surveillance in the Boston Marathon bombing case has been used (predictably) to provide justification for the need to gather more data and develop greater predictive-analytic capacities. But we might also detect here a sense in which the promises of big data and predictive analytics carry with them what William Bogard calls "the imaginary of surveillant control," with the emphasis on imaginary: "a fantastic dream of seeing everything capable of being seen, recording every fact capable of being recorded, and accomplishing these things whenever and wherever possible, prior to the event itself" (1996: 4-5). The fact that this visionary prophetic omniscience remains a "fantastic dream" does not mean that we should be unconcerned about the implications of the new totalizing practices of data collection and mining. Instead it means that we should be careful not to inadvertently allow our thinking to reinforce the flip side of the determinist logic that underpins big data boosterism.

Another important implication of big data-driven surveillance, as we are conceptualizing it here, concerns the importance of attending to the physical and logistical infrastructure that enables it. If the advent of big data is inseparable from the ability to assemble, store, and mine tremendous amounts of data, and if the processing of these data troves is necessarily automated, then infrastructure remains central to emerging forms of surveillance. In fact, the size of the infrastructural capacity is central to understanding what distinguishes something called "big data" from earlier forms of data collection, data-mining and database management. As engineers are keenly aware, increasing the scale of technical systems does not involve simply making them bigger in any straightforward sense; instead it often requires their complete reinvention. A similar implication follows for the effects of scaled-up systems: they do not simply affect more people or reach more territory, but instead can lead to radical social and material transformations. The mass production of standardized shipping containers, for example, required a massive build-out of infrastructure that in turn radically transformed the terms of global trade and the political-economic geography of the planet (Levinson 2008).

Infrastructures are challenging to describe and virtually impossible to study in their entirety, as Lisa Parks (2012) has noted. There is, on the one hand, a political economy of big data surveillance that explores the implications of ownership and control over the surveillant resources, including the platforms and the networks, the server farms, the algorithms and the cultivation and allocation of data-mining expertise. Such an approach also necessarily considers the relationship between political control and economic resources-the data processing capacity of multinational internet corporations like Google and Facebook, for example, or the ability of the state to access data troves accumulated by these commercial entities.

These companies already have the data collection infrastructure in place to transform communication systems into high-resolution surveillance systems on a global scale. If at one time the accumulation of the world’s stored data was the province of the academic sector and the state (and their various libraries and databases), the accelerated monetization of information has contributed to a dramatically expanding role for the private sector. One recent roundup of the world’s largest databases includes three commercial companies in the top five (Anonyzious 2012).

There are also epistemological dimensions to the central role of infrastructure in big data surveillance. Access to the big data resources housed by large corporate entities structures both access to useful "knowledge" and the character of this knowledge itself: not necessarily comprehended content, but excavated patterns. As Google likes to put it: no human reads your email-rather, machines turn it into metadata so as to correlate patterns of communication with patterns of advertising exposure and subsequent purchasing behavior. As Jeremy Packer (2013) argues, the form of knowledge on offer is tied to the infrastructure that generates it. The excavation of purely correlational findings that nonetheless have pragmatic value relies on access to databases and sense-making infrastructures. Packer uses the example of Google (as does the CIA’s Gus Hunt, in describing the inspiration of his agency’s data-mining practices), whose "computations are not content-oriented in the manner that advertising agencies or critical scholars are. Rather, the effect is the content. The only thing that matters are effects: did someone initiate financial data flows, spend time, consume, click, or conform? Further, the only measurable quantity is digital data" (2013: 297). For Packer, a critique of this shift to effect-as-content relies upon an engagement at the level of infrastructure: "Understanding media not merely as transmitters-the old ‘mass media’ function-but rather as data collectors, storage houses, and processing centers, reorients critical attention toward the epistemological power of media" (2013: 296).

The shift from content to metadata has implications for the convergent character of data mining: even though marketing and national security have received the lion’s share of media attention, data analytics play an increasingly important role in a growing number of spheres of social practice, from policing to financial speculation, transport and logistics, health care, employment, consumption, political participation, and education. Data collected by a particular application can often be repurposed for a variety of uses. The music streaming service Pandora, for example, gathers data about user preferences in order to provide customized listening recommendations, but also correlates listening habits with geography, and geography with voting patterns, in order to infer information about listeners’ political leanings. Another example is an application developed by Microsoft Research that apparently is able to predict users’ moods based on their patterns of smart phone use.
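
A minimal sketch of this kind of repurposing is simply a join across datasets collected for different purposes. The user records and the artist-to-leaning mapping below are fabricated; only the Bob Marley correlation itself is reported in the text.

```python
# Fabricated illustration of "function creep": data gathered to power music
# recommendations is joined with location data to infer political leanings.
listening = {  # collected to drive recommendations
    "user-1": ["Bob Marley", "Peter Tosh"],
    "user-2": ["Hank Williams"],
}
location = {  # collected for licensing and geo-targeting
    "user-1": "02139",
    "user-2": "79901",
}
# The correlation reported above: Bob Marley listeners lean Democratic.
leaning_by_artist = {"Bob Marley": "Democratic"}

def inferred_profile(user):
    """Join the datasets and attach an inferred political leaning."""
    leanings = {leaning_by_artist[a] for a in listening[user] if a in leaning_by_artist}
    return {"zip": location[user], "inferred_leaning": leanings or {"unknown"}}

print(inferred_profile("user-1"))
```

Neither dataset was collected to profile politics; the inference exists only because both are retained and joinable.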

The point is that so-called "function creep" is not ancillary to the data collection process, it is built into it-the function is the creep. Continuous repurposing of information initially gathered for other purposes is greatly facilitated by digitization, which makes storage, sharing, and processing easier. But function creep is also enabled by the new "save everything" logic of automated data analysis (Morozov 2013), where the relevance of any piece of data to future correlations and predictions can never be ruled out in advance.

All of these characteristics of the deployment of big data and associated forms of data mining have implications for how we think about the changing character of surveillance in data-driven contexts. (Klauser and Albrechtslund’s contribution to this special issue proposes a framework for exploring these changes.) To approach the developing character of big data surveillance, we might start by considering some recent influential definitions of surveillance. In their report on "The Surveillance Society" for the UK information commissioner, David Murakami Wood and colleagues define surveillance as, "purposeful, routine, systematic and focused attention paid to personal details, for the sake of control, entitlement, management, influence, or protection" (Murakami Wood et al. 2006: 4). They further emphasize that, "surveillance is also systematic; it is planned and carried out according to a schedule that is rational, not merely random" (Murakami Wood et al. 2006: 4). Similarly, in his influential formulation of "dataveillance" in the digital era, Roger Clarke refers to "the systematic monitoring of people or groups, by means of personal data systems, in order to regulate or govern their behaviour" (Clarke 1987). He subsequently distinguishes between targeted personal dataveillance and "mass dataveillance, which involves monitoring large groups of people" (Clarke 2003). A third example is Haggerty and Ericson’s (2000) concept of the "surveillant assemblage," often cited in recent work to theorize the shape and character of distributed, networked data monitoring on a mass scale (Cohen 2013; Klauser 2013; Murakami Wood 2013; Vukov and Sheller 2013; Salter 2013; etc.). The surveillant assemblage, to remind readers, "operates by abstracting human bodies from their territorial settings and separating them into a series of discrete flows. 
These flows are then reassembled into distinct ‘data doubles’ which can be scrutinized and targeted for intervention" (Haggerty and Ericson 2000: 606).

Such definitions remain productive for analysing many forms of contemporary surveillance, but they require some qualification in the context of big data. In particular, the speculative and totalizing aspects of big data collection transform the systematic and targeted character of surveillance practices. The notion that surveillance is systematic and targeted takes on a somewhat different dimension when the goal is to capture any and all available data. Drone data collection via the "Air Handler," for example, is more opportunistic than systematic (as was the capture of information from local WiFi networks by Google’s Street View cars). By the same token, the uses for such data are often more speculative than defined.

The very notion of a surveillance target takes on a somewhat different meaning when surveillance relies upon mining large-scale databases: the target becomes the hidden patterns in the data, rather than particular individuals or events. Data about the latter are the pieces of the puzzle that need to be collected and assembled in order for the pattern to emerge. In this regard, the target for data collection becomes the entire population and its environment: "all of reality" is, as Packer puts it, "now translatable. The world is being turned into digital data and thus transformable via digital manipulation" (2013: 298). This, of course, is the wager of big data surveillance: that those with access to the data have gained some power over the informated world, and that the patterns which emerge will give those with access to them an advantage of some kind. Big data surveillance, then, relies upon control over collection, storage, and processing infrastructures in order to accumulate and mine spectacularly large amounts of data for useful patterns. Big data surveillance is not about understanding the data, nor is it typically about explaining or understanding the world captured by that data-it is about intervening in that world based on patterns available only to those with access to the data and the processing power. Big data surveillance is not selective: it relies on scooping up as much information as possible and sorting out its usefulness later. In the big data world there is no functional distinction between targets and non-targets (suspects and non-suspects) when it comes to data collection: information about both groups is needed in order to excavate the salient differences between them.

Big data surveillance looks much less parsimonious than the panoptic model that has played such an important role in conceptualizing and critiquing the relationship between surveillance and power. Bentham’s model was meant to leverage uncertainty in the name of efficiency-the properly functioning panopticon allowed just one supervisor to impose discipline upon the entire prison population. Bentham speculated that once the system had been implemented it might continue to function even in the absence of a supervisor-with just an empty, opaque tower looming nearby to warn inmates that they might be monitored at any time. This is the logic of "dummy" speed cameras or surveillance cameras: that the spectacle of surveillance carries with it its own power. Compared to this alleged model of efficiency, the big data model looks somewhat extravagant: rather than mobilizing uncertainty (as to whether one is being watched or not), it mobilizes the promise of data surfeit: that the technology is emerging to track everything about everyone at all times-and to store this data in machine-readable form. Of course, the danger of such a model is that it must necessarily fall short of its goal; the only way to ensure that nothing is overlooked is to reproduce the world in its entirety in data-recorded form, and therefore to record data about the recording process itself, and so on, indefinitely. The result is that one of the hallmarks of big data surveillance is its structural incompleteness.

This incompleteness has its own consequences, given that the big data model attempts to move beyond the sampling procedures of other forms of partial data collection to encompass the entire population. Put somewhat differently, there is no guarantee that the data collected is either comprehensive or representative. The sheer size of the database is meant to compensate for these shortcomings, but does not prevent systematic forms of bias from working their way into the data trove (or the algorithms that sort it). Shoshana Magnet (2011) and Kelly Gates (2011), for example, have each demonstrated the ways in which debunked conceptions of racial identity work their way into biometric identification technologies. The database carries with it associations of objectivity that can "launder" the forms of bias that are baked into the data collection and sorting processes. Thus, a closer look at the labor that goes into shaping these processes is a crucial component of critical approaches to big data surveillance (see French’s contribution to this issue).

One rejoinder to such critiques, however, is that they rely upon an outmoded approach to the data: an over-emphasis on its content rather than its functional efficacy. This is Packer’s Kittler-inspired point: when the effect is the content, all other questions of referentiality (is the data representative, complete, etc.?) fall by the wayside. Presumably if there is a problem somewhere along the line, then the ability to attain the desired effect is impaired, but relative to what? There is no standard of truth, or even correctness, in such a model (as is implied, for example, by the standard of "the population" in probability sampling). There is merely the bar set by existing forms of practice. If a particular recommendation algorithm achieves a higher rate of success in getting people to watch movies, purchase books, or click on links, then it has succeeded. Questions about the representativeness of the data or biases in the algorithm are subordinated to this measure of success. In this regard, it would be somewhat misleading to say that the top priority of big data surveillance is to obtain as accurate and complete a view of the world as possible. Rather its goal is to intervene in the world as effectively as possible, which may well entail lower standards of comprehensiveness and accuracy.
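The "bar set by existing forms of practice" can be made concrete with a toy comparison. The function name and the figures below are illustrative assumptions, not drawn from the text: they simply show that, on this model, the only evaluation a recommender faces is relative lift over the incumbent.

```python
def success_rate(clicks, impressions):
    """Fraction of impressions that led to the desired action (e.g. a click)."""
    return clicks / impressions

# Hypothetical A/B figures for an incumbent and a candidate recommender.
rate_incumbent = success_rate(420, 10_000)   # 4.2% click-through
rate_candidate = success_rate(470, 10_000)   # 4.7% click-through

# The candidate "succeeds" simply by beating the incumbent; whether its
# underlying data were representative or unbiased never enters the evaluation.
candidate_wins = rate_candidate > rate_incumbent
```

There is no absolute benchmark in this loop, only a comparison between existing practices, which is precisely the point of the critique above.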

The complications that totalizing forms of data capture and analysis introduce for established ways of defining surveillance also suggest the need to re-examine existing legal frameworks. The idea that the institutions engaged in totalizing data capture are focused on distributions and patterns emerging in populations, and presumably uninterested in targeting particular individuals, allows them to circumvent established rights-based principles. In the United States, for example, the Fourth Amendment to the Constitution prohibits search and seizure of "persons, houses, papers, and effects" without a legally issued warrant that "particularly describ[es] the place to be searched, and the persons or things to be seized." This legal right is systematically subverted, not only by big data’s "no content, just metadata" assertion, as José van Dijck refers to it in this issue, but also by a whole parallel extra-legal domain of industry "self-regulation." The self-regulatory approach to privacy protection relies on so-called voluntary disclosure of personal data, written into the incomprehensible, small-type "privacy policies" that people agree to daily as a condition of participation in the online economy (Turow 2006). Such policies rely upon "an intrinsically asymmetrical relationship," as Sara Degli Esposti notes in her contribution to this issue. The point here is that the big data paradigm, in its applications that rely on data about human beings, is built on the foundation of what is, in reality, a regime of compulsory self-disclosure. And this regime is supported by and commensurate with a normalized and permanent "state of exception," in which individual legal rights are always suspended in any case, for all who might choose to opt out (because after all, opting out itself looks suspicious).

In short, if the totalizing and massively scaled-up data paradigm requires new ways of conceptualizing surveillance, it also requires renewed efforts at rescuing and reinventing the legal arguments and interventions that can be used to address and curb these practices. Legal scholar Julie Cohen (2013) offers one such effort at rethinking privacy in light of big data in her recent Harvard Law Review piece, "What privacy is for"; among her recommendations is a reemphasis on the legal and moral necessity of due process. From another angle, Jay Stanley (2013) of the ACLU offers a short but persuasive argument against the effectiveness of applying big data to predict terrorist attacks, using an analogy from physics called Brownian motion: a water molecule’s path through water is easy to understand in retrospect but impossible to predict in advance. If strategies for fighting terrorism continue to take the approach of amassing greater and greater quantities of data about everybody, Stanley explains, the outcome is indeed predictable: "many more incredulous senators, amazed that our security agencies failed to thwart attacks when all the signs seemed so ‘clear’ in advance." Of course, the challenge of reframing the policy debate returns anew with each terrible tragedy of this kind; terrorist attacks in particular seem uniquely suited to technological-solutionist responses.
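Stanley's Brownian-motion analogy can be illustrated with a toy simulation, a one-dimensional random walk (the setup and names here are ours, not Stanley's). Every step of the recorded path can be "explained" after the fact, yet no amount of accumulated history improves the prediction of the next step beyond a coin flip.

```python
import random

def random_walk(steps, seed=None):
    """Simulate a 1-D Brownian-style walk: each step moves +1 or -1 at random."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(steps):
        path.append(path[-1] + rng.choice([-1, 1]))
    return path

# In retrospect, the full path is on record and every step looks trivially
# "clear": we can point to exactly where and when each move happened.
path = random_walk(1000, seed=42)

# In prospect, the best forecast for the next step remains P(up) = 0.5,
# no matter how long a history we have amassed; collecting more data about
# the past does not sharpen the prediction of the future.
```

This is the asymmetry Stanley describes: the "clear signs" visible to incredulous senators exist only in hindsight, because each individual trajectory was never predictable in the first place.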

Certainly another way of challenging the ethical and legal underpinnings of big-data surveillance is by pointing to the glaring contradiction in the demand on the part of state and corporate actors that their operations remain strictly confidential, even as those same operations require individuals to relinquish all rights to privacy and lay their lives bare. Among the many deafening warning alarms raised by Edward Snowden’s NSA revelations is the need to hold the state to the same, if not greater, scrutiny than it now demands of citizens en masse, starting with all claims that state agencies make to so-called state-secrets privileges. And an equal measure of scrutiny should be levelled at the trade-secrets claims of all non-state institutions with big-data capabilities.

Obviously, more systematic and targeted forms of surveillance have not dropped by the wayside, and definitions of surveillance that account for such practices remain relevant. We are not arguing that the character of surveillance has changed in all contexts. Rather we seek to identify salient aspects of emerging forms of data-driven surveillance and thereby to gesture toward their societal consequences. The shift toward totalizing data capture was more or less apparent well before The Guardian published its first NSA spying revelation in June 2013. But the steady stream of revelations trickling out from Snowden’s files gives us cause to consider how well our established theoretical and political approaches to surveillance measure up to the challenges of big data, real or imagined. Given its heavy reliance on infrastructure, big data surveillance is available only to those with access to the databases and the processing power: it is structurally asymmetrical. Likewise, the forms of knowledge it generates are necessarily opaque; they are not shareable understandings of the world, but actionable intelligence crafted to serve the imperatives and answer the questions of those who control the databases. These forms of knowledge push against any attempt to delimit either the collection of data or the purposes to which that data are turned.

Contributions to this issue

While the range of topics and issues addressed here is by no means exhaustive, the contributors to this special issue cover considerable ground in their analyses of big data and predictive analytics as issues of concern to Surveillance Studies. Some of the contributors offer speculative theoretical reflections, while others offer empirical case studies grounded in specific domains. They address a selection of key areas: pandemics and disease surveillance systems, public health surveillance, self-tracking and the "Quantified Self" movement, urban digital infrastructures, the big business of big data, crime prediction programs, and more. The authors also elaborate on a number of critical concepts, some borrowed from other sources, that should prove analytically productive for further research: datafication, dataism, dataveillance, analytical competitors, algorithmic governmentality, and others. There are many other areas and topics that might have fit within the scope of this issue. Promising areas for further research include the role of big data surveillance in the finance industry and the analysis of the financial markets; the accelerated monetization of data at the heart of big data as a business model; the impact and role of big data in political polling and public opinion analysis; market research and sentiment analysis; policing, crime mapping and crime prediction; military applications of big data; and many more. It is also important to note that authors originally submitted work for consideration in this theme issue before the first Snowden NSA story broke in June 2013, with the reviewing and revising process extending through the initial revelation and the steady stream of exposures that followed over the latter half of 2013. Thus, while some of the authors incorporated the newly visible NSA activities into their articles during the revision process, none of the pieces published here are principally focused on PRISM or other NSA programs revealed by Edward Snowden’s files.

In her contribution, José van Dijck examines the logic of "datafication" that underpins the big data paradigm: a belief in the value- and truth-producing potential inherent in the mass collection and analysis of data and metadata (Mayer-Schoenberger and Cukier 2013). Among the problems van Dijck identifies with the datafication view is the assumption of a direct and self-evident relationship between data and people, as if the digital traces users leave behind online reveal their essential traits and tendencies, from their spiritual beliefs and intelligence levels to their earning potential and predisposition to diseases. Van Dijck sees an ideology of dataism in the celebratory view of datafication espoused by big data proponents: the resurgence of a flawed faith in the objectivity of quantification, and a misguided view of metadata as the "raw material" of social life. The opacity of data science supports the ideology of dataism, obscuring the priorities that shape both the range of data collected and the ways that data gets analysed. Van Dijck also explores the problem of trust at the heart of dataism, insisting that what is at stake is not just our trust in specific government agencies or corporations, but in the integrity of the entire networked ecosystem.

In her essay, "When Big Data Meets Dataveillance," Sara Degli Esposti draws substantially on business literature to identify some of the specific ways that business priorities shape the practices of big data analytics, as well as the kinds of answers that are generated. Information technology companies, she explains, are now differentiated based on their capacity for analyzing big data: "analytical competitors" are so named for their ability to outperform their peers in terms of their ability to apply analytics to their own data, drawing on internal expertise (Davenport and Harris 2007). Of course, not all companies have such high-skilled in-house labor, creating a market for "analytical deputies," or companies building their business models on conducting large-scale analyses of data for less data-savvy corporate customers. Degli Esposti employs Roger Clarke’s concept of dataveillance to parse out the surveillant aspects of big-data business practices along four dimensions: recorded observation, identification and tracking, analytical intervention, and behavioral manipulation. While these dimensions of dataveillance operate in a feedback loop, it is forms of analytical intervention in particular that enable businesses operating in different sectors to achieve key objectives, "from customer acquisition and retention to supply chain optimization; from scheduling or routing optimization to financial risk management; from product innovation to workforce recruitment and retention." Analytical intervention also allows companies to engage more effectively in profit-maximizing price discrimination strategies. Moreover, while businesses depend increasingly on data-mining agents (computer programs designed to automatically identify patterns in data), what is clear from Degli Esposti’s discussion is that human expertise remains central to the knowledge- and profit-producing potential of big data. Big data companies employ "tech savvy CEOs" as well as highly trained data scientists with the sophisticated statistical and mathematical skills necessary to create advanced analytical applications. Big data surveillance is a high-tech, high-skill operation.

Martin French’s contribution also zeroes in on the production processes that constitute big data systems, focusing on the activities of Canadian health care professionals. Specifically, he presents a case study that examines the process challenges associated with the implementation of a large-scale, regionally interconnected public health information system in Ontario, Canada. Countering the "informatic ethos" that conceals the labor that makes IT systems work, French focuses on what he calls "informatics practice," or the combined human and non-human labor activity that materializes information, the practical activities that contribute to "making information a material reality in quotidian life." Viewed from this on-the-ground perspective, the foundations of new forms of data-driven monitoring appear more subject to the vicissitudes of individual practices, convenience, and happenstance than suggested by monolithic portrayals of big data as a black-box category. The work of public health professionals is often at odds with IT system operations, for example, and such professionals are often very much concerned with patient privacy. In French’s case study, "the everyday operation of health IT not only complicated the work of public health, but also blurred public health’s surveillant gaze." French’s article shifts the lens from the impact of surveillance on those subjected to it, to its impact on the work practices of those tasked with operating and otherwise making the monitoring systems function.

In his contribution, Tyler Reigeluth conceptualizes big data practices as forms of "algorithmic governmentality" (Berns and Rouvroy, in Reigeluth, this issue), considering how these practices take shape as forms of subjectivation in the Foucauldian sense: interventions that bring human subjectivities into being. He expands on the notion of "digital traces," variously understood as the marks, prints, infinitesimal pieces and intersection points that are gathered together and analysed to make sense of individuals and collectives. Even in academic thought, Reigeluth notes, there seems to be some agreement that identities are "the collection or the sum of digital traces." Similar to van Dijck, he asks what is at stake in understanding digital traces as the raw material of human identities, and the basis for discovering fundamental truths about them. Rather than seeing big data analytics as a radical departure from existing forms of subjectivation, Reigeluth suggests that the concept of digital trace can shed light on the ways digital technology is continuous with long-standing institutional and technological arrangements for shaping human subjectivities by structuring the environments they inhabit. Doing so requires opening up the black box of data capture and analysis, following the traces as they enter into and exit algorithmic systems and their physical infrastructures. Focusing on two manifestations of the big data paradigm, a crime prevention program called PredPol (short for "Predictive Policing") and the Quantified Self movement, Reigeluth considers the ironies of algorithmic rationality. "If the ideal individual is perfectly correlated and immanent to his environment," he asks, and "if her singularity can be reduced to the degree to which she fulfills these correlations," then is it actually possible for an ethical and political subject to exist as such?

Martijn van Otterlo explores the ways in which the flexibility of digital environments can double as laboratory and virtual Skinner box, enabling an ongoing process of experimentation in social control. He explores the ways in which digital environments are constantly modulated for the purposes of determining how best to influence the behavior of those whose actions can be captured by interactive forms of data collection. If, as van Otterlo suggests, paranoia is a "step in the right direction" in the face of the transformation of cyberspace into myriad cybernetic laboratories, this is a consequence of the capitalist imaginary at play in the fields of big data. He cites the example of a Microsoft patent for an automated system that monitors and analyses employee behavior in order to ensure "that desired behaviors are encouraged and undesired behaviors are discouraged." It turns out that, from the perspective of managers, marketers, and other associated authorities, futuristic digital dreams recapitulate familiar fantasies of control and manipulation with deep roots in the history of media technologies. Van Otterlo’s contribution helpfully opens up a range of social spheres to a consideration of the relationship between big data mining and experimentation beyond the familiar one of advertising. By drawing on examples from a range of recent appropriations of data-mining technology, he argues that we need to consider the implications of big data-driven forms of monitoring and surveillance in the realms of politics, education, policing, and the workforce, among others.

One of the attributes of "big data" that emerge from the contributions to this issue is the increasing reach of data mining, and the unfolding of new registers of monitoring and surveillance. Francisco Klauser and Anders Albrechtslund’s article proposes a framework for research on big data based on the four axes of "agency, temporality, spatiality and normativity." The virtue of such an approach is that it draws upon the wide-ranging uses of data-driven monitoring to broaden the reach of the study of surveillance. The article invokes the seemingly disparate practices of self-monitoring on the one hand, and urban surveillance associated with "smart cities" on the other, to trace commonalities in data-mining techniques and their relation to forms of social control. The authors argue for approaches to data-driven forms of monitoring that supplement critiques of discipline at the individual level with those of regulation at the population level. In other words, when decisions are made at the aggregate level, drawing on probability levels generated by data mining, the focus is not on particular individuals but on aggregate outcomes. The authors also argue for expanding the reach of Surveillance Studies beyond the monitoring of humans to consider the wide array of objects and contexts about which information is collected. The fantasy of "big data" is that it might become powerful enough to create a comprehensive data double of both the social world and the object world (and their interactions).

Lindsay Thomas’s exploration of the logic of disease monitoring systems designed to track and anticipate the spread of contagious illnesses doubles as a meditation on the temporality of pre-emption more generally. Disease monitoring partakes of the logic of predictive analytics: the mobilization of data collected in the past to model possible futures in the present. As she puts it, "The future is anticipated and surveilled using past data," a formulation that could apply equally to financial modeling, crime prediction, climate modeling, and much more. The paradox of such forms of modeling, she notes, is not simply that, when it comes to catastrophes like pandemics, they attempt to forestall future events by interjecting them into the present, but that the very attempt to collapse temporality pushes in the direction of a catastrophic stasis: "their continual construction of soon-to-arrive pandemics normalizes catastrophe. They build ‘future’ catastrophes all around us, teaching us to accept them, and, by extension, the measures we all must take to prepare for them, as given."

Taken together, the contributions to the issue develop some of the emerging themes in explorations of big data. One is the tension between the promise of predictive control embodied in the big data paradigm, and the realities of biased, incomplete data. Closely related to this tension is the promotional hype associated with new forms of data collection and mining, the misplaced faith in big data that Morozov (2013) aptly refers to as "the folly of technological solutionism." The contributors also explore the epistemological claims associated with the forms of "knowledge" that can be extracted from sorting and analyzing increasingly enormous, merged datasets. Perhaps most importantly, they provide a starting point for studying the social, political, and cultural consequences of a burgeoning domain of automated surveillance. In this regard, big data is not simply a matter of the size of the database, but of the claims made on its behalf, and of its application to an ever-expanding range of social practices.

References
Anonyzious. 2012. 10 Largest Databases of the World. March 24, 2012.

Bogard, William. 1996. The Simulation of Surveillance: Hypercontrol in Telematic Societies. Cambridge: Cambridge University Press.

Chakrabarti, Soumen. 2009. Data-mining: Know it All. New York: Morgan Kaufmann.

Clarke, Roger. 1987. Information Technology and Dataveillance.

Clarke, Roger. 2003. Dataveillance – 15 Years On.

Cohen, Julie. 2013. What Privacy is for. Harvard Law Review 126: 1904-1933.

Davenport, Thomas H. and Jeanne G. Harris. 2007. Competing on Analytics: The New Science of Winning. Boston: Harvard Business School Press.

Dwoskin, Elizabeth. 2014. Pandora Thinks It Knows if You Are a Republican. The Wall Street Journal. February 13.

Gates, Kelly. 2011. Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance. New York: New York University Press.

Goodman, Amy. 2014. Death by Metadata: Jeremy Scahill and Glenn Greenwald Reveal NSA Role in Assassinations Overseas. Democracy Now!

Haggerty, Kevin D. and Richard V. Ericson. 2000. The Surveillant Assemblage. British Journal of Sociology 51 (4): 605-622.

Klauser, Francisco. 2013. Political Geographies of Surveillance. Geoforum 49 (October 2013): 275-278.

Levinson, Marc. 2008. The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger. Princeton: Princeton University Press.

Magnet, Shoshana. 2011. When Biometrics Fail: Gender, Race, and the Technology of Identity. Durham: Duke University Press.

Mayer-Schoenberger, Viktor and Kenneth Cukier. 2013. Big Data. A Revolution that Will Transform How We Live, Work, and Think. London: John Murray Publishers.

Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.

Murakami Wood, David, Kirstie Ball, David Lyon, Clive Norris, and Charles Raab. 2006. A Report on the Surveillance Society. Report for the UK Information Commissioner’s Office. Surveillance Studies Network, UK.

Murakami Wood, David. 2013. What is Global Surveillance? Towards a Relational Political Economy of the Global Surveillant Assemblage. Geoforum 49: 317-326.

Packer, Jeremy. 2013. Epistemology Not Ideology OR Why We Need New Germans. Communication and Critical/Cultural Studies 10(2-3): 295-300.

Parks, Lisa. 2012. Things You Can Kick: Conceptualizing Media Infrastructures. Paper presented at the annual meeting of the American Studies Association Annual Meeting, Puerto Rico Convention Center and the Caribe Hilton, San Juan, Puerto Rico, Nov. 15-18, 2012.

Perez, Evan and Siobhan Gorman. 2013. Phones Leave a Telltale Trail. The Wall Street Journal, June 15.

Salter, Mark B. 2013. To Make Move and Let Stop: Mobility and the Assemblage of Circulation. Mobilities 8 (1): 7-19.

Sheth, Falguni A. and Robert E. Prasch. 2013. In Boston, Our Bloated Surveillance State Didn’t Work. April 22.

Sledge, Matt. 2013. CIA’s Gus Hunt On Big Data: We ‘Try To Collect Everything And Hang On To It Forever.’ Huffington Post, March 20.

Stanley, Jay. 2013. The Asymmetry Between Past and Future, and Why It Means Mass Surveillance Won’t Work. May 13.

Turow, Joseph. 2006. Niche Envy: Marketing Discrimination in the Digital Age. Cambridge, MA: MIT Press.

Vukov, Tamara and Mimi Sheller 2013. Border Work: Surveillant Assemblages, Virtual Fences, and Tactical Counter-Media. Social Semiotics 23 (2): 225-241.

Zarsky, Tal. 2013. Transparent Predictions. University of Illinois Law Review 4: 1503-1570.

Zizek, Slavoj and Glyn Daly. 2004. Conversations with Zizek. Cambridge: Polity.

Author Affiliations

Mark Andrejevic
University of Queensland, Australia.


Kelly Gates
University of California, San Diego, US.