Most of what we I.T. people do is design systems and software to provide features, to add capabilities to an information system. Even negative requirements are mostly about the ability to restrict access to information. Moreover, years of hectic requirements have made designers careful to leave some doors open for future features.
This has been very useful for adding traceability features to many systems later on, which is good for testers: we can often trace every step of a system's behaviour in its data (sometimes including prolific logs).
But what if we need not to be able to do something?
Several recent cases show that a system's inability to do something can be a desirable feature for some stakeholders:
- At the GOTO conference in October 2014, Martin Fowler highlighted how important it is for a communication service not to be able to read its users' e-mails.
- We had interesting discussions with a client on design options for opposable evidence, proving that some exchanges occurred but also that some did not, and on how the system and its data had to be unable to alter certain business logs in that regard.
- Apple recently presented a payment system that distinguishes itself from many current systems by not being able to reveal certain information (notably the customer's name and card number to the merchant, and the full transaction details to Apple).
- Snapchat built most of its buzz on its goal of not keeping or disseminating snaps (and is now facing issues around how thoroughly it does not).
- Microsoft is currently fighting a warrant so as to be allowed not to give a US judge access to content hosted overseas (rather than being unable to).
- Google and Apple are putting features in place in their mobile OSes so that they themselves are not able to retrieve information from users' devices.
- The LinkedIn password hack exposed many users' accounts; in similar circumstances Twitter was not exposed, mainly because it had implemented a solution making it impossible to recover passwords from their hashes.
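The hashing approach behind that last point can be sketched with Python's standard library. This is a minimal illustration, not any particular provider's implementation; the function names and iteration count are assumptions. The key property is that the stored digest is a one-way, salted transform: it can be re-computed and compared, but not reversed into the password.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; real deployments tune this


def hash_password(password, salt=None):
    """Derive a salted, one-way digest that cannot be reversed into the password."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password, salt, digest):
    """Re-compute the digest and compare in constant time; never decode it."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)


salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))   # the right password matches
print(verify_password("wrong", salt, digest))    # a wrong one does not
```

Even an attacker who steals the whole table of salts and digests has no function to call to get the passwords back; the system was designed not to be able to produce them.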
These “designed not to” features are usually not at the top of a service's initial feature list, so they are clear candidates for being forgotten in a long backlog. At best they are addressed not in the initial design but in a refactoring sprint (and refactoring is not always like design: it has its own least-effort issues, as un-designing is a difficult art).
On the other hand, the business consequences of missing these “designed not to” features can be significant: they can be show stoppers for those who miss them, or show enablers for those who have considered them.
Of course, some “designed not to” features can be difficult to detect or to “not implement”, but several directions can be explored:
- Manage your testing effort closely. Unit testing in particular should cover some “not to” cases, and testers often have a good sense of how the system's “surface” is expanding.
- During high-level architecture activities, perform cross-coverage analysis: not only requirement coverage but also justification of the solution's surface, since extra surface often appears between architectural levels (requirements expanding from level to level). And don't forget risk analysis: low probability or low unitary impact is not a sufficient argument for neglecting a requirement, as the consequences, and the collective impact, can be huge.
- Learn from patterns and solutions that segment information and access, provide unidirectional links, or filter information; from anonymization solutions; and of course from how cryptographic systems work (especially asymmetric ones).
- Use business leverage: some software vendors provide solutions with defined features that are hard to extend or misuse, and SaaS providers can offer additional guarantees, since you often can't access the raw information but only the result of the service; some can even guarantee that such access won't happen and/or will be traced independently. In both cases you reduce the risk of the organisation being accused of abnormal use of the information.
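One of the patterns above, filtering and anonymizing information, can be sketched as a keyed one-way transform (pseudonymization). This is a minimal illustration under stated assumptions: the key, the function name, and the card-number scenario are hypothetical, not taken from any specific product. Downstream systems get a stable token they can correlate on, but are designed not to be able to recover the original identifier.

```python
import hashlib
import hmac

# Hypothetical secret held only by the trusted party (e.g. a payment processor);
# anyone without it cannot link tokens back to real identifiers.
VAULT_KEY = b"processor-secret-key"


def pseudonymize(card_number):
    """Keyed one-way transform: stable token out, no way back to the input."""
    return hmac.new(VAULT_KEY, card_number.encode(), hashlib.sha256).hexdigest()


token = pseudonymize("4111111111111111")
print(token)  # a 64-character hex token with no trace of the card number
```

The merchant's systems can deduplicate or track transactions through the token, yet a breach on their side exposes nothing usable: the ability to reveal the card number was deliberately left out of their part of the design.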
Don’t be the next organisation to lose information you didn’t even use, or to lose opportunities on collected data you never leveraged. Don’t spend your time fighting social or legal litigation you had nothing to do with in the first place. Be the next innovator that takes 1% of every transaction with a nice piece of software. Be the next service provider whose service includes safety in a Big Data and IoT world.
About Claude Bamberger
Claude Bamberger has been an information systems architect since his first job in 1994, realizing over nearly 20 years that it’s a role one grows into more than one simply holds, mainly by enlarging the technology scope known, the skills mastered, and the contexts experienced. Particularly interested in technologies and what they can mean for improving business results, Claude went from consulting in the early days of object-oriented development and distributed computing, to project, team, and I.T. department management for half a decade, before returning to consulting at Sogeti in 2008 after co-founding an innovative start-up in the Talent Management field.
More on Claude Bamberger.