Dec 1, 2016

German spy chief says Russian hackers could disrupt elections

The head of Germany’s foreign intelligence service has warned that next year’s general election could be targeted by Russian hackers intent on spreading misinformation and undermining the democratic process.

Bruno Kahl, president of the Bundesnachrichtendienst, said Russia may have been behind attempts during the US presidential campaign to interfere with the vote.

“We have evidence that cyber-attacks are taking place that have no purpose other than to elicit political uncertainty,” he told the Süddeutsche Zeitung in his first interview since he was appointed five months ago.

“The perpetrators are interested in delegitimising the democratic process as such, regardless of who that ends up helping. We have indications that [the attacks] come from the Russian region.

“Being able to attribute it to a state agent is technically difficult but there is some evidence that this is at least tolerated or desired by the state.”

Kahl said the suspicion was that a large proportion of attacks were being carried out simply to demonstrate technical prowess. “The traces that are left behind in the internet create an impression of someone wanting to demonstrate what they are capable of,” he said.

Kahl joins a range of leading voices in Germany who have recently expressed their concerns over Russian interference, particularly through the spread of fake news stories.

Hans-Georg Maaßen, president of the domestic BfV intelligence agency, said in an interview that cyberspace had become “a place of hybrid warfare” in which Russia was a key player. “More recently, we see the willingness of Russian intelligence to carry out sabotage,” he said.

Maaßen said Russian secret services had been carrying out attacks on computer systems in Germany which, as far as his agency had been able to ascertain, were “aimed at comprehensive strategic data gathering”.

Only when people were confronted with the fact the information they were receiving was untrue would “the toxic lies lose their effectiveness”, he said.

Hackers were said to have been behind attacks on Deutsche Telekom on Sunday and Monday that disabled internet and phone access for almost a million customers in Germany. The company said the security breach was part of a worldwide attack on routers. Security experts said the hackers may have been Russian but they had no proof.

The German chancellor, Angela Merkel, said on Tuesday she did not know who was responsible for the strike but “such cyber-attacks, or hybrid conflicts as they are known in Russian doctrine, are now part of daily life and we must learn to cope with them.

“We have to inform people, and express our political convictions clearly,” she said, calling on the population to not allow themselves to be irritated by such rogue operations. “You just have to know that there’s such a thing and learn to live with it,” she said.

Arne Schönbohm, president of the Federal Office for Information Security and known as Germany’s “cyber sheriff”, called the Deutsche Telekom attacks worrying: “It shows to what extent cyber-attacks can affect every citizen. We need to get used to the idea that in future computer attacks, both comparable and far worse, will increasingly take place.”

In 2015, an attack on the German parliament’s internet infrastructure was blamed on Russian hackers by German intelligence. Russian officials have strenuously denied the accusations.

Germany faces a heated election campaign next year, largely due to the pressure Merkel is under over her liberal refugee policy, along with the rise of rightwing populists Alternative für Deutschland (AfD), which is on track to enter the Bundestag for the first time.

Public disenchantment with Merkel – who is also under fire for her critical stance towards Russia over its annexation of Crimea – is ripe for exploitation by her political opponents, several of whom, including the AfD, have reached out to the Kremlin and vice versa.

Merkel has also warned that populists and social media platforms spreading propaganda were in danger of causing unprecedented damage to democracy.

Speaking to the Bundestag last week, she said: “Today we have fake sites, bots, trolls – things that regenerate themselves, reinforcing opinions with certain algorithms, and we must learn how to deal with them.”

A report on Russian influence in France, Germany and the UK, published this month by the Atlantic Council, pointed to an extensive Russian “disinformation campaign” being carried out in Germany, which it said had “opened opportunities for the Kremlin to influence German politics and the public debate”.

The Pegida anti-Islam movement has repeatedly hammered home the message at its rallies that the influence of President Vladimir Putin’s Russia in Germany is a welcome alternative to the imperial designs of the US and Brussels.

Lib Dems to oppose UK plan to block porn sites without age checks

The Liberal Democrats will oppose proposals to force adult websites to impose strict age verification and to empower a regulator to block websites that show a range of sexual acts, calling them the kind of measures one would expect of China or Russia.

The digital economy bill, which will introduce new policies for Britain’s electronic communications infrastructure and services, is due for a report stage vote and third reading in the Commons on Monday afternoon.

It is almost certain to pass the Commons with the backing of Labour and Conservative MPs. But Brian Paddick, the Lib Dems’ shadow home secretary, said: “Clamping down on perfectly legal material is something we would expect from the Russian or Chinese governments, not our own. Of course the internet cannot be an ungoverned space, but banning legal material for consenting adults is not the right approach.

“Liberal Democrats believe in evidence-based policy and it is obvious that blocking perfectly legal content is not just illiberal, it will easily be circumvented. We will not support these measures when they come to the House of Lords.

“The Investigatory Powers Act already has the potential to undermine online privacy and there is very little in the new bill to protect our most sensitive data. Liberal Democrats will do everything possible to ensure that our privacy is not further eroded by this Tory government.”

The digital economy bill would empower the British Board of Film Classification (BBFC) to assess whether websites hosting pornographic material have strong enough age verification in place, and whether they are showing “prohibited content”, which will be assessed according to the limits it already places on DVD releases.

The BBFC’s censorship rules ban a range of sexual activities – from female ejaculation to heavy bondage – that are widespread on many mainstream adult websites. Until now, however, there has been no way to enforce the restrictions on websites based outside UK jurisdiction. Under the proposed law, websites failing either test could be fined or blocked.

Alistair Carmichael, the Lib Dems’ shadow first secretary of state, is expected to speak against the bill on Monday. A Lib Dem source said there was not much the party’s MPs could do to stop it, but their peers would try to amend it heavily. A better course of action would be to introduce sex and relationship education that addressed pornography, the source added.

There is also the question of whether the bill could break international law. David Banisar, the senior legal counsel at Article 19, a charity that campaigns for freedom of speech, said he expected that it would not survive a challenge at the European court of human rights.

“This is really an ancient battle that has been going on since the internet existed, which is there’s a lot of content out there that some people don’t like and they are trying to restrict it in a way which is overly broad, which catches a lot of things that are – while not desirable to everybody’s tastes – still perfectly legal to see,” he said.

“They are trying to impose fairly archaic rules on new media. And then they are trying to impose them globally, basically, because this really will have a global impact. Neither of those is really allowable under international law.

“The restrictions have to be proportional, they have to be limited to what is the least restrictive way of dealing with something. So those are certainly issues that would go into an analysis looking at whether this would really even do that.”


Could crowdsourcing expertise be the future of government?

We lack public institutions – a participatory bureaucracy and open parliamentary processes – that know how to tap into the collective intelligence of our communities, and draw power from the participation of the many, rather than the few.

It is the absence of these open institutions, and the resulting failure to take account of the views, voices and know-how of the many disaffected people who voted – and those who did not – during the EU referendum and the US presidential election, which create a vacuum that charismatic demagogues end up filling.

Despite complaints about government dysfunction, Donald Trump has no strategy for how to fix government, save his most recent proposal to kill two regulations for every new one, a page taken straight from David Cameron’s 2011 “red tape challenge”.

Any progress toward data-driven and evidence-informed policymaking that had been underway in the Obama administration is likely to be systematically rolled back – or at least ignored – under President Trump. There’s nothing to suggest that the White House nudge unit (emulating the UK model), which champions policy experimentation informed by social science research, will survive the chopping block. Nor the data-driven criminal justice initiative that convenes local jurisdictions to commit to empirically measurable reform projects.

Although the first cheques have been written, investments in precision medicine, which rely on massive quantities of data to deliver more targeted care, may not continue. And longstanding open data priorities shared by the US and the UK governments, which have led to the publication of tens of thousands of new datasets, may also be dropped.

None of this is any surprise from a candidate whose presidential campaign was punctuated – and thrived in terms of media attention – by a willingness to play fast and loose with facts.

Of course, the new US administration is not alone in a pervasive contempt for expertise. “We’ve had enough of experts,” Michael Gove infamously said, with a recent – and not very comforting – qualification that he was targeting “a sub-class of experts, particularly economists, pollsters, social scientists.” And though there are profound differences between events in the US, UK, Hungary, Austria and France, all display a common thread of anti-elitist, anti-establishment sentiment.

The success of populist candidates highlights a distrust of traditional government institutions that has been percolating for some time. There has long been a conflict between governing by experts and democracy. The history of the 20th century is the history of professionalisation, and the creation in government – and elsewhere – of a governing class that relegated citizens to the role of spectators.

Citizen engagement is largely confined to elections, opinion polls or jury service – asking people what they feel, not what they know and can do – even though democracy should be rule by, for and with the people.

However, this dichotomy between equality and expertise, between democracy and professionalism, is false.

In fact, expertise rooted in lived experience or scientific fact is widely distributed in society. We have witnessed a shift away from credentialed experts towards citizen experts in everything from restaurant reviews to medical advice. There are many more academic researchers than the few who are lucky enough to advise government, and expertise is not limited to academics, nor synonymous with credentials and the institutions of higher education that award them.

So how do we link this distributed expertise to governing? How do we create more participatory institutions?

After all, there is scarcely a public decision that could not benefit from an infusion of greater expertise – both credentialed and experiential – from outside government. We design the delivery of social services without the benefit of insights from the people who receive those services. We make health policies designed to prevent a pandemic without a clear understanding of what people do and do not know about a disease.

Enter crowdsourcing. Online tools are now making it possible for institutions to systematically get more diverse help, and for more members of the public to participate in problem solving by sharing their knowledge and skills.

A recent example is Mapaton CDMX, an effort on the part of twelve organizations in Mexico City to get riders of its informal system of 29,000 microbuses to enter GPS data into a shared database, in order to map 1,500 routes. Over two weeks in February 2016, riders mapped almost the entire system and, with this data in hand, an SMS-based service was developed that allows commuters to enter an origin and destination and get route information.
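By way of illustration, the lookup half of such a service could be as simple as the following minimal sketch; the names, stops and data model here are hypothetical, not Mapaton’s actual code.

```python
# Minimal sketch (hypothetical, not Mapaton's code): rider-submitted
# traces build a route table; an SMS query looks routes up by stop pair.
from collections import defaultdict

routes = defaultdict(set)  # (origin, destination) -> route names

def add_trace(origin: str, destination: str, route_name: str) -> None:
    """Record a rider-submitted GPS trace as a known route."""
    routes[(origin, destination)].add(route_name)

def sms_reply(origin: str, destination: str) -> str:
    """Build the text-message reply for an origin/destination query."""
    found = routes.get((origin, destination))
    if not found:
        return "No route found"
    return "Take: " + ", ".join(sorted(found))

add_trace("Indios Verdes", "Tacuba", "Ruta 88")
print(sms_reply("Indios Verdes", "Tacuba"))  # -> Take: Ruta 88
```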

Crowdsourcing is more than brainstorming. It goes beyond asking people to come up with ideas or supply information. On Amnesty International’s Decoders Network, more than 8,000 volunteers from 150 countries participate in projects to identify human rights violations using satellite photographs.

But the challenge of transferring the success of examples like Mapaton or Decoders to transform public institutions is the limitation of the open call. Those with the greatest know-how often don’t hear about the opportunity, and we can’t govern on the basis of serendipity.

For all forms of engagement to be more effective – whether driven top-down or bottom-up – we need to move to smarter crowdsourcing, which uses technology to make opportunities to participate more visible, and integrates them into how decisions get made.

How do we get there?

First, we have to replicate and scale successful examples. The Smarter Crowdsourcing for Zika project, organized by the Inter-American Development Bank and the Governance Lab, coordinated ministries of health, sanitation, and modernization across four governments in Latin America, for a four-month curated crowdsourcing effort. The project matched hundreds of international experts to specific problems associated with Zika, ranging from trash collection to long term care, for a series of six online conferences designed to inform government responses to mosquito-borne viruses.

Second, we must overcome the assumption that the purpose of engagement is purely to build legitimacy. It is not. If the goal of participation is simply communication between government, citizens and interest groups, then we miss the knowledge building aspects of crowdsourcing. These enable us to find missing information, generate alternate hypotheses, undertake tasks, get more eyeballs on a problem, or boots on the ground.

Third, we should move beyond the assumption that participation must be mass-based. Instead, we should construct a range of different practices that speak to people’s knowledge, experience and passions to spot problems, design policies, work on drafts or participate in implementation.

Fourth, in an era of networks, we must ensure that engagement is no longer limited to interest groups – NGOs, unions, women’s groups – and, instead, look to broader networks of people with innovative ideas to contribute. For the Zika project, representatives of the World Health Organization took part, but so did a researcher from Pakistan, who is using predictive analytics to spot dengue, and a social entrepreneur from Brooklyn, who has designed an app to coordinate school children to pick up trash where water collects.

Finally, there is too little understanding of the models of engagement. We need to accelerate social science research on who participates and why if we are to design effective engagement practices that make government work better.

Will we see a shift to more participatory institutions at the national level over the next four years in the US or the UK? It’s unclear, at best. But at the regional, state and local level, we can and must invest in institutional innovation.

This means more than thrusting a Shoreditch or Silicon Valley techie into an open government role for a tour of duty. By divorcing the idea of expertise from elite social institutions and creating tools to enable neutral identification of talent and ability, technology is democratizing expertise. But we need to train today’s public servants to use these tools to leverage data, unlock talent, and connect motivated innovators inside and outside of government.

Over the next fifty years, we will face challenges greater than any previous generation, and we will need to run our institutions differently. People may not be conversant in the sport of politics, but they do possess expertise in spades. Those who govern need to tap into that know-how, not occasionally, but continuously.

Beth Simone Noveck is Rogatz visiting professor at Yale Law School and Hultin global network professor at New York University’s Tandon School of Engineering, where she directs The Governance Lab. Previously, she served in the White House as Deputy Chief Technology Officer and director of its Open Government Initiative. This article is based on her 2016 Campaign for Social Science/SAGE annual lecture, the full text of which is available here.

Netflix lets users download videos for offline viewing

Netflix has begun rolling out the ability to download videos from its streaming service to smartphones and tablets for offline viewing.

Offline viewing is arguably the feature most demanded by users, and one of the things that differentiated rival services, including Amazon’s video streaming service and pay-TV services such as Sky and Virgin.

Eddy Wu, Netflix director of product innovation, said: “While many members enjoy watching Netflix at home, we’ve often heard they also want to continue their Stranger Things binge while on airplanes and other places where internet is expensive or limited.”

The feature is available from today on Android and iOS devices, and includes many TV series and movies, including Netflix’s original content such as Orange Is the New Black, Narcos and the recently released The Crown. The company says that more will be made available soon.

Video downloads will be provided at no added cost within the service’s existing monthly subscription fees, which start at £5.99 a month in the UK ($7.99 in the US).

Netflix is locked in a battle with streaming rival Amazon, as well as traditional broadcasters which are making inroads into “over the top” streaming services.

Netflix started by using expansive libraries of previously broadcast TV and films to lure subscribers, but in recent years it has increasingly focussed on original content it pays to show first.

Netflix has bought shows such as Stranger Things, Kevin Spacey’s House of Cards and Tina Fey’s Unbreakable Kimmy Schmidt, helping to push its total budget for programming this year to $6bn. It also has huge liabilities for its back catalogue of shows from other networks totalling $11.4bn.

Amazon has also put money into original shows such as Transparent, Alpha House, The Man in the High Castle and the recent release of The Grand Tour – a Top Gear-like show from Jeremy Clarkson, James May and Richard Hammond, which is being used to spearhead a global rollout to match Netflix’s availability in 130 countries.

Though Amazon has the advantage of bundling its video subscription as part of its Prime delivery service, Netflix has managed to maintain a lead in video. In October it reported that it had almost 87 million subscribers worldwide, and although Amazon does not break out the number of people who use its video service, estimates in the UK and US suggest Netflix is more popular.

Enders Analysis TV analyst Toby Syfret said that people who subscribe to Amazon are more likely to also subscribe to Netflix, and “as long as you keep the price down” most consumers would not feel forced to choose between the two. However, he said the increasing competitiveness of Amazon’s service will still have spurred Netflix to match its ability to offer downloads.

“What they have always tried to do is make their product easy and uncomplicated,” he said. “But [it is now] a question of being able to offer what the rest of the market does, and when it is Amazon that does it …”

Though Netflix and Amazon are considered the leaders in video streaming, both have followed in the footsteps of the BBC, which led the way by launching iPlayer in 2007 and has allowed users to download programmes for offline watching on mobile devices since 2014.

However, Syfret said the service was no longer quite so cutting edge. “It’s always difficult when you start early. When you are a trailblazer really it looks frontline, then other things come along. At some point the BBC will have to decide whether it wants to re-engineer things.”

National Lottery: 26,500 players' online accounts accessed

About 26,500 National Lottery players are facing compulsory password resets on their online accounts after they were apparently accessed by cybercriminals.

Camelot, the firm that operates the game, said it had become aware of “suspicious activity on a very small proportion” of accounts, and it was now taking steps to understand what had happened. Logins may have been stolen from other websites where players use the same details, it said.

Cybercriminals had not been able to access “core National Lottery systems”, Camelot added.

“We do not hold full debit card or bank account details in National Lottery players’ online accounts and no money has been taken or deposited,” the company said in a statement.

“However, we do believe that this attack may have resulted in some of the personal information that the affected players hold in their online account being accessed.”

The National Lottery has about 9.5 million customers registered to play online. Of the compromised accounts, fewer than 50 had been suspended since the attack on Camelot’s servers, after some personal details were changed, the company said, although “some of these details may have been changed by the players themselves”.

“We’d like to reassure our customers that protecting their personal data is of the utmost importance to us,” Camelot’s statement added. “We are very sorry for any inconvenience this may cause to our players and would like to encourage those with any concerns to contact us directly, so we can discuss it with them in more detail.”

The kind of confidential personal information accessed could be used to build false customer profiles or commit fraud later on, said one cybersecurity expert.

Chris Hodson, from information security firm Zscaler, added: “With no technical details included in the National Lottery’s statement about how the data was exfiltrated, just that it was, we can only speculate as to the tactics of these hackers.

“The act of stealing personal information from these accounts while leaving financial credentials untouched also highlights that the motive of the criminals was not immediate financial fraud but highly sought personally identifiable information.”

A spokesman for the Information Commissioner’s Office said Camelot had submitted a breach report on Tuesday night. “The Data Protection Act requires organisations to do all they can to keep personal data secure – that includes protecting it from cyber-attacks,” he said.

“Where we find this has not happened, we can take action. Organisations should be reminded that cybersecurity is a matter for the boardroom, not just the IT department.”

Camelot said it was working with the National Crime Agency and the National Cyber Security Centre, a new division of GCHQ, to investigate the incident.

An NCA spokesman said: “We can confirm that we are investigating an incident linked to Camelot. As our inquiries are ongoing, we cannot comment further at this time.”

The National Lottery hack is just the latest online security breach to hit British consumers this year. Earlier this month, Tesco Bank fell victim to a cyber-attack which resulted in it paying out an estimated £2.5m to 9,000 customers.

Eight million UK-based Yahoo users were affected when the internet firm’s defences were breached in September, leading to sharp criticism when it emerged that crucial account details were not encrypted.

And in April more than 15,000 expectant parents had their data – including email addresses, usernames and passwords – compromised after a hack on the National Childbirth Trust.

Nick Gibbons, partner at insurance and risk law firm BLM, said that Camelot’s statement seemed to fail to acknowledge the significance of the invasion of its customers’ privacy, and the risk posed by the potential disclosure of their personal information.

“While perhaps less important and embarrassing than that seen in the Ashley Madison case, some people will not want the fact that they bet on the national lottery to be made public,” he said.

How to solve Facebook's fake news problem: experts pitch their ideas

The impact of fake news, propaganda and misinformation has been widely scrutinized since the US election. Fake news actually outperformed real news on Facebook during the final weeks of the election campaign, according to an analysis by Buzzfeed, and even outgoing president Barack Obama has expressed his concerns.

But a growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser.

The project has snowballed since Pariser started it on 17 November, with contributors putting forward myriad solutions, he said. “It’s a really wonderful thing to watch as it grows,” Pariser said. “We were talking about how design shapes how people interact. Kind of inadvertently this turned into this place where you had thousands of people collaborating together in this beautiful way.”

In Silicon Valley, meanwhile, some programmers have been batting solutions back and forth on Hacker News, a discussion board about computing run by the startup incubator Y Combinator. Some ideas are more realistic than others.

“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”

Journalists, the public or algorithms?

Most of the solutions fall into three general categories: the hiring of human editors, crowdsourcing, and technological or algorithmic solutions.

Human editing relies on a trained professional to assess a news article before it enters the news stream. Its proponents say that human judgment is more reliable than algorithms, which can be gamed by trolls and are arguably less nuanced when faced with complex editorial decisions; Facebook’s algorithmic system famously botched the Vietnam photo debacle.

Yet hiring people – especially the number needed to deal with Facebook’s volume of content – is expensive, and it may be hard for them to act quickly. The social network ecosystem is enormous, and Wardle says that any human solution would be next to impossible to scale. Humans are also prone to subjectivity, and even an overarching “readers’ editor”, if Facebook appointed one, would hold a disproportionately powerful position that was open to abuse.

Crowdsourced vetting would open up the assessment process to the body politic, having people apply for a sort of “verified news checker” status and then allowing them to rank news as they see it. This isn’t dissimilar to the way Wikipedia works, and could be more democratic than a small team of paid staff. It would be less likely to be accused of bias or censorship because anyone could theoretically join, but could also be easier to game by people promoting fake or biased news, or using automated systems to promote clickbait for advertising revenue.
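In code, the core of such a vetting scheme might look like this minimal sketch, where the vote minimum and cut-offs are hypothetical:

```python
# Minimal sketch of crowdsourced vetting: verified checkers vote on a
# story and a label emerges once enough votes are in.

def crowd_verdict(votes: list, min_votes: int = 10) -> str:
    """votes: True = "looks genuine", False = "looks fake"."""
    if len(votes) < min_votes:
        return "pending"                   # too few votes to call
    share_genuine = sum(votes) / len(votes)
    if share_genuine >= 0.8:
        return "genuine"
    if share_genuine <= 0.2:
        return "fake"
    return "disputed"                      # checkers disagree

print(crowd_verdict([True] * 9 + [False]))  # -> genuine
```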

Algorithmic or machine learning vetting is the third approach, and the one currently favored by Facebook, which fired its human trending news team and replaced it with an algorithm earlier in 2016. But the current systems are failing to identify and downgrade hoax news or distinguish satire from real stories; Facebook’s algorithm started spitting out fake news almost immediately after its inception.

Technology companies like to claim that algorithms are free of personal bias, yet they inevitably reflect the subjective decisions of those who designed them, and journalistic integrity is not a priority for engineers.

Algorithms also happen to be cheaper and easier to manage than human beings, but an algorithmic solution, Wardle said, must be transparent. “We have to say: here’s the way the machine can make this easier for you.”

How to treat fake news, exaggeration and satire on Facebook

Facebook has been slow to admit it has a problem with misinformation on its news feed, which is seen by 1.18 billion people every day. It has had several false starts on systems, both automated and using human editors, that inform how news appears on its feed. Pariser’s project details a few ways to start:

Verified news media pages

Similar to Twitter’s “blue tick” system, a news organization would apply for verification and, once proved to be a credible news source, its stories would be published with a “verified” flag. Verification could also mean higher priority in newsfeed algorithms, while repeatedly posting fake news would mean losing verified status.

Pros: The system would be simple to impose, possibly through a browser plug-in, and is likely to appeal to most major publications.

Cons: It would require extra staff to assess applications and maintain the system, could be open to accusations of bias if not carefully managed and could discriminate against younger, less established news sites.
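A minimal sketch of how the verified flag could feed a feed-ranking algorithm; the boost factor, strike limit and domain are hypothetical:

```python
# Minimal sketch: verified sources get a ranking boost, and repeated
# fake-news strikes revoke the flag.

verified = {"example-news.com"}   # sources that passed review
strikes = {}                      # source -> confirmed fake-news count

def report_fake(source: str, max_strikes: int = 3) -> None:
    """Count a confirmed fake story; revoke verification at the limit."""
    strikes[source] = strikes.get(source, 0) + 1
    if strikes[source] >= max_strikes:
        verified.discard(source)

def rank_score(base_score: float, source: str) -> float:
    """Verified sources get higher priority in the newsfeed."""
    return base_score * (1.5 if source in verified else 1.0)
```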

Separate news articles from shared personal information

“Social media sharing of news articles/opinion subtly shifts the ownership of the opinion from the author to the ‘sharer’,” Amanda Harris, a contributor to Pariser’s project, wrote. “By shifting the conversation about the article to the third person, it starts in a much better place: ‘the author is wrong’ is less aggressive than ‘you are wrong’.”

Pros: Easy and cheap to implement.

Cons: The effect may be too subtle and not actually solve the problem.

Add a ‘fake news’ flag

Labelling problematic articles in this way would show Facebook users that there is some question over the veracity of an article. It could be structured the same way as abuse reports currently are; users can “flag” a story as fake, and if enough users do so then readers would see a warning box that “multiple users have marked this story as fake” before they could click through.

Pros: Flagging is cheap, easy to do and requires very little change. It would make readers more questioning about the content they read and share, and also slightly raises the bar for sharing fake news by slowing the speed at which it can spread.

Cons: It’s unknown whether flagging would actually change people’s behavior. It is also vulnerable to trolling or gaming the system; users could spam real articles with fake tags, known as a “false flag” operation.
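A minimal sketch of the flagging mechanics described above; the threshold is hypothetical:

```python
# Minimal sketch of the flag threshold: one flag per user per story,
# and a warning box once flags pass the threshold.

flags = {}   # story URL -> set of user ids who flagged it

def flag_story(url: str, user_id: str) -> None:
    flags.setdefault(url, set()).add(user_id)   # one flag per user

def show_warning(url: str, threshold: int = 100) -> bool:
    """True -> show "multiple users have marked this story as fake"."""
    return len(flags.get(url, set())) >= threshold
```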

Add a time-delay on re-shares

Articles on Facebook and Twitter could be subject to a time-delay once they reach a certain threshold of shares, while whitelisted sites such as the New York Times would be exempt.

Pros: Would slow the spread of fake news.

Cons: Could affect real news as much as fake, and whitelisting would be attacked as biased and unfair, especially on the right. Users could also be frustrated by the enforced delay: “I want to share when I want to share.”
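A minimal sketch of such a throttle; the threshold, delay and whitelist are hypothetical:

```python
# Minimal sketch of the re-share delay: past a share threshold a story
# is held briefly before appearing, unless its domain is whitelisted.

WHITELIST = {"nytimes.com"}
share_counts = {}                # story URL -> running share count
THRESHOLD = 1_000                # shares before the delay kicks in
DELAY_SECONDS = 600              # ten-minute cooling-off period

def hold_time(url: str, domain: str) -> int:
    """Seconds to hold a new share of this story before publishing it."""
    share_counts[url] = share_counts.get(url, 0) + 1
    if domain in WHITELIST or share_counts[url] < THRESHOLD:
        return 0                 # publish immediately
    return DELAY_SECONDS         # throttle viral, unvetted stories
```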

Partnership with fact-checking sites, such as Snopes

Fake news could automatically be tagged with a link to an article debunking it on Snopes, though inevitably that will leave Facebook open to criticism if the debunking site is attacked as having a political bias.

Pros: Would allow for easy flagging of fake news, and also raise awareness of fact-checking sources and processes.

Cons: Could be open to accusations of political bias, and the mission might also creep: would it extend to statements on politicians’ pages?

Headline and content analysis

An algorithm could analyze the content and headline of news to flag signs that it contains fake news. The content of the article could be checked for legitimate sourcing – hyperlinks to the Associated Press or other whitelisted media organizations.

Pros: Cheap, and easily amalgamated into existing algorithms.

Cons: An automated system could allow real news to fall through the cracks.
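A minimal sketch of the sourcing check; the whitelist here is hypothetical:

```python
# Minimal sketch: scan an article's outbound links for whitelisted
# outlets as one signal of legitimate sourcing.
import re

WHITELISTED = {"apnews.com", "reuters.com"}

def has_legitimate_sourcing(html: str) -> bool:
    """True if the article links to at least one whitelisted outlet."""
    domains = re.findall(r'href="https?://(?:www\.)?([^/"]+)', html)
    return any(domain in WHITELISTED for domain in domains)

print(has_legitimate_sourcing('<a href="https://apnews.com/article/1">AP</a>'))  # True
```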

Cross-partisan indexing

This system would algorithmically promote non-partisan news, by checking stories against a heat-map of political opinion or sharing nodes, and then promoting those stories that are shared more widely than by just one part of the political spectrum. It could be augmented with a keyword search against a database of language most likely to be used by people on the left or the right.

Pros: Cheap, and easily combined with existing algorithms. Can be used in partnership with other measures. It’s also a gentler system that could be used to “nudge” readers away from fake news without censoring.

Cons: Doesn’t completely remove fake news.
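A minimal sketch of such a score; the political buckets and scoring rule are hypothetical:

```python
# Minimal sketch of cross-partisan indexing: a story scores highest
# when no single political bucket dominates its shares.

def cross_partisan_score(shares_by_leaning: dict) -> float:
    """0 = shared by one camp only; higher = more evenly spread."""
    total = sum(shares_by_leaning.values())
    if total == 0:
        return 0.0
    largest = max(shares_by_leaning.values())
    return 1.0 - largest / total

print(cross_partisan_score({"left": 500, "centre": 480, "right": 520}))  # ~0.65
print(cross_partisan_score({"left": 1480, "centre": 20, "right": 0}))    # ~0.01
```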

Sharer reputation ranking

This would promote or hide articles based on the reputation of the sharer. Each person on a social network would have a score (public or private) based on feedback from the news they share.

Pros: Easy to populate a system quickly using user feedback.

Cons: User feedback systems are easy to game, so fake news could easily be upvoted as true by people who want it to be true, messing up the algorithm.
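A minimal sketch of a reputation score of this kind; the neutral starting score and update rate are hypothetical:

```python
# Minimal sketch of sharer reputation: feedback on shared stories
# nudges an account's score, which then weights its future shares.

reputation = {}   # user id -> score between 0 and 1

def record_feedback(user_id: str, was_accurate: bool, rate: float = 0.1) -> None:
    """Move the score toward 1 for accurate shares, toward 0 for fakes."""
    old = reputation.get(user_id, 0.5)        # everyone starts neutral
    target = 1.0 if was_accurate else 0.0
    reputation[user_id] = old + rate * (target - old)

def feed_weight(user_id: str) -> float:
    """Down-rank shares from accounts with a record of fake news."""
    return reputation.get(user_id, 0.5)
```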

Visible design cues for fake news

Fake news would come up in the news feed as red, real news as green, satire as orange.

Pros: Gives immediate visual shorthand to distinguish real from fake news. Could also be a browser plug-in.

Cons: Still requires a way to distinguish one from the other, whether labor-intensively or algorithmically. Any mistake with an algorithm, say one that puts Breitbart articles in red, would open Facebook up to accusations of bias.

Punish accounts that post fake news

If publishing fake news were punishable with bans on Facebook, it would disincentivise organizations from doing so.

Pros: Attacks the problem at its root and could get rid of the worst offenders.

Cons: The system would be open to accusations of bias. And what about satire, or news that’s not outright fake but controversial?

Tackling fake news on the web outside Facebook

News is shared across hundreds of other sites and services, from SMS and messaging apps such as WhatsApp and Snapchat, to distribution through Google’s search engine and aggregation sites such as Flipboard. How can fake news, inaccurate stories and unacknowledged satire be identified in so many different contexts?

Fact-checking API

A central fact-checking service could publish an API, a constantly updated feed of information, which any browser could query news articles against. A combination of human editing and algorithms would return information about the news story and its URL, including whether it is likely to be fake (if it came from a known click-farm site) or genuine. Stories would be “fingerprinted” in the same way as advertising software.

People could choose their fact-checking system – Snopes or Politifact or similar – and then install it as either a browser plug-in or a Facebook or Twitter plug-in that would colour-code news sources on the fly as either fake, real or various gradations in between.

Pros: Human editors would become less necessary as the algorithm learns, and wouldn’t have to check each story individually. Being asked to choose a fact-checker might encourage critical thinking.

Cons: Will be labor-intensive and expensive, especially at first. It could be open to accusations of bias, especially once the algorithm takes over from the human input. Arguably only those already awake to the problem would choose to opt in, unless a platform like Facebook or Google assimilates it as standard.
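A minimal sketch of the plug-in side of such an API; the endpoint, response format and colour mapping are hypothetical, since no such service is specified here:

```python
# Minimal sketch: ask the user's chosen fact-checker about a URL and
# map its verdict to a display colour.
import json
from urllib.parse import quote
from urllib.request import urlopen

COLOURS = {"fake": "red", "disputed": "orange", "genuine": "green"}

def colour_for(article_url: str,
               api_base: str = "https://factcheck.example.org/check") -> str:
    """Query the fact-checking service and return a display colour."""
    with urlopen(f"{api_base}?url={quote(article_url, safe='')}") as resp:
        verdict = json.load(resp).get("verdict", "disputed")
    return COLOURS.get(verdict, "grey")
```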

Page ranking system

Much like Google’s original PageRank algorithm, a system could be developed to assess the authority of a story by its domain and URL history, suggested Mike Sukmanowsky of Parse.ly.

This would effectively be, Sukmanowsky wrote, a source reliability algorithm that calculated a “basic decency score” for online content that pages like Facebook could use to inform their trending topic algorithms. There could also be “ratings agencies” for media; too many Stephen Glass-style falsified reporting scandals, for example, and the New York Times could risk losing its triple-A rating.

Pros: Relatively easy to construct using open-sourcing, and could be incorporated into existing structures. Domains that serially propagate fake information could be punished by being downgraded in rank, effectively hiding them.

Cons: Little recourse for sites to appeal against their ranking, and could make it unfairly difficult for less established sites to break through.
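A minimal sketch of how such a score might be computed; the inputs and weights are hypothetical:

```python
# Minimal sketch of a source-reliability ("basic decency") score built
# from a domain's history.

def decency_score(age_days: int, debunked: int, total_stories: int) -> float:
    """Blend a domain's track record with its age into a 0..1 score."""
    if total_stories == 0:
        return 0.5                               # no history: neutral
    accuracy = 1.0 - debunked / total_stories    # share of stories that held up
    maturity = min(age_days / 3650, 1.0)         # full credit after ~10 years
    return 0.7 * accuracy + 0.3 * maturity

print(decency_score(age_days=730, debunked=40, total_stories=400))  # ≈ 0.69
```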

Connect fake news to fact-checking sites

Under this system, fake news would be inter-linked (possibly through a browser plug-in) to a story by a trusted fact-checking organization like Snopes or Politifact. (Rbutr already does this, though on a modest scale.)

Pros: Connects readers with corrections that already exist. Facebook or Google could use a database like Snopes in its algorithm.

Cons: Unless this kind of system gets hardwired into Facebook or Google, people have to want to know if what they’re reading is fake.

On current evidence, many people feel comfortable when presented with news that doesn’t challenge their own prejudices and preferences – even if that news is inaccurate, misleading or false.

What many of these solutions don’t address is the more complex, nuanced and long-term challenge of educating the public about the importance of informed debate – and why properly considering an accurate, rational and compelling viewpoint from the other side of the fence is an essential part of the democratic process.

“There’s a feeling that in trying to come up with solutions we risk a boomerang effect that the more we’re debunking, the more people will disbelieve it,” said Claire Wardle. “How do we bring people together to agree on facts when people don’t want to receive information that doesn’t fit with how they see the world?”

Jasper Jackson contributed to this report

Jeremy Hunt proposes ban on sexting for under-18s

Social media companies should prevent under-18s from texting sexually explicit images, the health secretary has said. Giving evidence to the Commons health committee on suicide prevention efforts, Jeremy Hunt also called on the technology industry to crack down on cyberbullying by introducing software that can detect when it is happening.

Hunt said social media firms needed to do more to combat the culture of online intimidation and sexual imagery, which is having a negative impact on the mental health of young people.

“I think social media companies need to step up to the plate and show us how they can be the solution to the issue of mental ill health amongst teenagers, and not the cause of the problem,” he said. “There is a lot of evidence that the technology industry, if they put their mind to it, can do really smart things.

“For example, I just ask myself the simple question as to why it is that you can’t prevent the texting of sexually explicit images by people under the age of 18, if that’s a lock that parents choose to put on a mobile phone contract. Because there is technology that can identify sexually explicit pictures and prevent them being transmitted.

“I ask myself why we can’t identify cyberbullying when it happens on social media platforms by word pattern recognition, and then prevent it happening. I think there are a lot of things where social media companies could put options in their software that could reduce the risks associated with social media, and I do think that is something which they should actively pursue in a way that hasn’t happened to date.”
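At its crudest, the word-pattern recognition Hunt describes could look like the following sketch; the patterns are invented placeholders, and any real system would need a far richer model than keyword matching:

```python
# Minimal sketch of word-pattern detection for bullying messages.
import re

ABUSE_PATTERNS = [r"\bnobody likes you\b", r"\bkill yourself\b"]

def looks_like_bullying(message: str) -> bool:
    """Flag a message that matches any known abusive pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in ABUSE_PATTERNS)

print(looks_like_bullying("Nobody likes you, just log off"))  # True
```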

Members of the health committee urged Hunt to put more resources into suicide prevention.