Category Archives: Security & privacy

A firm stance on ePrivacy regulations in Europe

Europe is preparing to take a firm stance on new digital privacy rules. This will likely have a huge impact on online advertising, and on any company that wants to expand its services to Europe.

“Would you allow a stranger to go into your bedroom or look through your drawers without your permission?” she asked. “No, you probably wouldn’t.” The same concept, she added, should apply to the online world.


Making blockchains anonymous

Blockchain transactions on public ledgers, such as Bitcoin, are not truly anonymous. It is possible to link an address to an individual and expose all of that address's historical transactions.

Now a recent article in MIT Tech Review has described how blockchains can be combined with zero-knowledge proofs so transactions can be provable, yet truly anonymous. Zero-knowledge technology allows someone to prove that something is true (such as the fact that you have reached the legal drinking age) without revealing any other information about yourself.
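To make the idea concrete, here is a toy Schnorr-style proof of knowledge (made non-interactive with the Fiat-Shamir heuristic): the prover convinces a verifier that she knows a secret x behind a public value y, without revealing x. This is only an illustrative sketch with insecure toy parameters; the anonymous-blockchain systems discussed in the article use much heavier constructions (e.g., zk-SNARKs).

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1, with g generating the order-q subgroup mod p.
# These numbers are far too small to be secure; for illustration only.
p, q, g = 23, 11, 4

def challenge(y: int, t: int) -> int:
    """Fiat-Shamir: derive the challenge from a hash instead of a live verifier."""
    data = f"{g}|{y}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prover knows secret x; publishes y = g^x and a proof (t, s)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)      # one-time random nonce
    t = pow(g, r, p)              # commitment
    c = challenge(y, t)
    s = (r + c * x) % q           # response; on its own it reveals nothing about x
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier checks g^s == t * y^c (mod p) without ever learning x."""
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = 7
y, t, s = prove(secret_x)
print(verify(y, t, s))                 # True: the proof convinces the verifier
print(verify(y, t, (s + 1) % q))       # False: a tampered proof fails
```

The verifier learns only that the prover knows *some* x with y = g^x, which is the same "prove it without showing it" property that anonymous blockchain transactions rely on.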

A truly anonymous blockchain may be a game-changer for private transactions.


Facebook wants your embarrassing images

Facebook is trialing a service in Australia where people upload their sexy, embarrassing images to prevent them from being published on the social network. This service uses a technical procedure called “hashing.”

Using hashing algorithms to take a mathematical summary or “fingerprint” of a photo is a cool technique. It can be used to recognize an image that has been seen before without actually viewing the image. This is very useful for detecting duplicates and creating watch lists.
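A minimal sketch of the idea: store only cryptographic hashes on a watch list, so known images can be flagged without anyone viewing or retaining the images themselves. The byte strings below are hypothetical stand-ins for real image files.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a fixed-size digest that identifies these exact bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# The watch list holds only hashes, never the images.
watch_list = {fingerprint(b"known-bad-image-bytes")}

def is_known(image_bytes: bytes) -> bool:
    """Check an upload against the watch list without storing it."""
    return fingerprint(image_bytes) in watch_list

print(is_known(b"known-bad-image-bytes"))   # True: exact duplicate detected
print(is_known(b"known-bad-image-byteS"))   # False: any change alters the hash
```

The second check illustrates the limitation of exact hashing: changing even one byte produces a completely different digest, which is why the "fuzzy" techniques described next matter.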

Special “fuzzy” hashing techniques can even recognize a photo when it has been cropped or changed slightly. Microsoft has developed a PhotoDNA technique that is particularly good at this, and it is a key tool used by INTERPOL and other law enforcement agencies for processing child exploitation images. By quickly recognizing images that have been seen before, investigators can focus their efforts on new images that might provide fresh evidence.
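To illustrate the "fuzzy" idea, here is a simple perceptual hash (an average hash) in the same spirit as PhotoDNA, though far simpler: visually similar images get nearby hashes, and similarity is measured by Hamming distance. The 4x4 grayscale grids below are toy data standing in for real photos; PhotoDNA itself is proprietary and works differently.

```python
def average_hash(pixels):
    """One bit per pixel: 1 if brighter than the image's mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance means visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 210, 40, 30],
            [190, 220, 35, 25],
            [ 50,  60, 180, 170],
            [ 45,  55, 175, 185]]
# A slightly brightened copy, e.g. after re-encoding or a light edit.
tweaked = [[v + 5 for v in row] for row in original]
# A completely different image.
unrelated = [[i * 16 for i in range(4)] for _ in range(4)]

h0, h1, h2 = map(average_hash, (original, tweaked, unrelated))
print(hamming(h0, h1))   # 0: the small edit leaves the hash unchanged
print(hamming(h0, h2))   # 8: the unrelated image is far away
```

Because the hash captures the brightness *pattern* rather than exact bytes, it survives small edits that would defeat a cryptographic hash, which is what makes this family of techniques useful for matching cropped or re-saved photos.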

Microsoft also offers a cloud service so that online sites can screen images that are uploaded by users.

Now Facebook is starting a trial using the same kind of technology. The idea is that revenge-porn images, typically explicit images distributed without the subject’s knowledge or consent, can be recognized and blocked before they are put online.

This is an interesting idea, but how is Facebook to know whether an image submitted for blocking is actually a revenge-porn image? Someone might instead be trying to interfere with a business competitor, for example.

The solution that Facebook describes is to have staff members review each submitted image to make sure that it violates their policies on explicit images. Obviously this is rather privacy-invasive, and presents further risks of exposure and embarrassment.

More concerning is that Facebook intends to keep the submitted images for some time (with some form of blurring), instead of immediately deleting them once the hashes are created. This obviously creates further privacy and security risks.

Facebook is trying to tackle a serious problem using effective technical methods, but the devil is going to be in the implementation. It is not clear why the images have to be retained, for example, once the hashes are created. It will be interesting to follow the results of the trial – hopefully Facebook will let us know how it worked out.


Recognizing faces is hard

When studying automatic biometric face recognition, it is important to understand what an appropriate baseline performance level is. Research has often shown that humans are actually not that good at face recognition, especially when comparing a photograph with a person in front of them.

Here is some more research in this area.

Experiments suggest that telling if two unfamiliar faces are the same or different is no easy task. … A new article in Applied Cognitive Psychology confirms these fears, suggesting that our real-world capacity to spot fakes in their natural setting is even worse than imagined.


Fingerprints are Usernames, not Passwords

From Dustin Kirkland, an interesting way to think about fingerprints:

I could see some value, perhaps, in a tablet that I share with my wife, where each of us have our own accounts, with independent configurations, apps, and settings.  We could each conveniently identify ourselves by our fingerprint.  But biometrics cannot, and absolutely must not, be used to authenticate an identity.  For authentication, you need a password or passphrase.  Something that can be independently chosen, changed, and rotated.  I will continue to advocate this within the Ubuntu development community, as I have since 2009.

From the Canyon Edge: Fingerprints are Usernames, not Passwords.

Trends in Biometrics Research: Notes from BTAS 13

I am currently at the BTAS conference in Washington DC getting up to speed on the latest research on biometrics. Here are a few trends I have observed so far:

  • an obvious lack of research on what I would call traditional biometric problems, including fingerprint matching, iris matching, and face recognition for high-quality, passport-style photos — these appear to be mostly solved problems;
  • recognition of spoofing as a challenging problem, as is evident in the quick attacks against the iPhone 5S fingerprint sensor;
  • a continuing trend to focus on challenging acquisition environments, including face photos taken at an angle (faces in the wild) and matching from video;
  • more interest in different kinds of sensors, including cell phone cameras, touch pads, and the Kinect.

Here is some more information about the conference:

BTAS 2013 … is the premier research conference focused on all aspects of biometrics. It is intended to have a broad scope, including advances in fundamental signal processing, image processing, pattern recognition and statistical and mathematical techniques relevant to biometrics, new algorithms and/or technologies for biometrics, analysis of specific applications, and analysis of the social impact of biometrics technology.

BTAS 2013 | Biometrics: Theory, Applications and Systems.

Anonymity, Encryption, and Free Expression

Photo Credit: Bindaas Madhavi

Here is an interesting EFF article about the recent report from the Human Rights Council on anonymity, encryption, and free speech.

Today, governments all around the world are seeking to ban, block, or redesign personal communications technologies based on a misguided notion that these technologies are too secure.

Anonymity, Encryption, and Free Expression: What Nations Need to Do | Electronic Frontier Foundation.

New book chapter: Harm mitigation from the release of personal identity information

A new book chapter by Jean Camp and myself is now available. It appears in a new collection edited by George Yee titled Privacy Protection Measures and Technologies in Business Organizations: Aspects and Standards. Here is the abstract, citation information, and link to the book.

In August 2007 approximately 445,000 letters were sent to retirees who belonged to the California Public Employees’ Retirement System (CalPERS). This was a routine mailing, but all or a portion of each pensioner’s Social Security Number (SSN) was printed on the address panel of the envelopes, making this event anything but ordinary. This massive breach of sensitive SSNs, along with names and addresses, exposed these people to potential identity theft and fraud. What are the harms associated with a data breach of this nature? How can those harms be mitigated? What are, or should be, the costs and consequences to the organization releasing the data? While it is very difficult to predict the specific consequences of a data breach of this nature, a statistical model can be used to estimate the likely financial repercussions for individuals and organizations, and the recent settlement in the TJX case provides a good model of harm mitigation that could be applied in this case and similar cases.

Patrick, A. S., & Camp, L. J. (2012). Harm mitigation from the release of personal identity information. In Yee, G. O. (Ed.), Privacy Protection Measures and Technologies in Business Organizations: Aspects and Standards. (pp. 309-330).

Funding available for privacy research and education in Canada

The Office of the Privacy Commissioner of Canada is calling for proposals for cutting-edge privacy research and public education projects in Canada. The application deadline is March 14, 2011.

The Office is interested in receiving research proposals focusing on four priority areas:

1) identity integrity and protection,

2) information technology,

3) genetic privacy, and

4) public safety.

However, the Office will continue to accept research proposals on issues that fall outside these areas.

As well, the Office invites proposals to fund public education and regional outreach initiatives that aim to inform Canadians about their privacy rights and how they may better protect their personal information.

All proposals will be evaluated on the basis of merit by OPC officials, and the maximum amount that can be awarded for each research or public education project is $50,000.  (A maximum of $100,000 can be awarded per organization.)

Not-for-profit organizations, including education institutions and industry and trade associations, are eligible, and this includes consumer, voluntary and advocacy organizations.

Anatomy of a successful online attack

Ars Technica has an interesting article describing in detail how the group Anonymous was able to penetrate and embarrass the security firm HBGary and its website.

This was not a particularly advanced attack, but rather one that focused on known weaknesses, bad practices, and social engineering of people who should know better.

Most frustrating for HBGary must be the knowledge that they know what they did wrong, and they were perfectly aware of best practices; they just didn’t actually use them. Everybody knows you don’t use easy-to-crack passwords, but some employees did. Everybody knows you don’t re-use passwords, but some of them did. Everybody knows that you should patch servers to keep them free of known security flaws, but they didn’t.