20 June 2018

Institutional Investors in Crypto-Land

I think there are a few things going on around the "institutional investor" theme in crypto-land...

One reason we're not seeing a flood of Big Money into cryptos is that these guys are very, very patient. Their salaries and bonuses do not depend on them moving quickly, and there's a lot for them to do in preparation for tackling an entirely new asset class.

Two is that some heat has been taken out by the derivatives: institutions can trade those without having to deal with the problems of holding actual crypto-currencies — all the custodial challenges and regulatory uncertainties.

Three: They're probably intensively consulting with the regulators. They live in a highly regulated climate, and Legal will want to know exactly when and how they're crossing lines before doing so.

Four: Custodial solutions for large organisations are in hot development, but they're certainly far from developed or tested anywhere close to the levels an institution would require. You do NOT want a billion USD worth of BTC sitting in a hot wallet when some janitor walks out the door with the keys on a flash-stick.

So I think there are, and will be for a while yet, some delays in seeing actual institutional flows into the crypto markets. But they will come. A futures contract on the price of BTC may make for nice trading, but it's never going to be the same as an investment in the real thing.

17 May 2018

Do We Really Need StableCoins?

When I was a very small child growing up in South Africa, most of the world's fiat currencies were backed by Gold. One South African Rand bought you two American Dollars, and that rate of exchange never varied. The price of Gold was fixed by an Agreement of Nations at $35 per troy ounce.
I suppose that, with the value of the world's money fixed by decree to the value of shiny metal, it was to be expected that things would favour the currency of the country where most of the shiny metal was mined. At the time South Africa mined substantially more than half the world's annual tonnage of new Gold, and its currency was as hard as rocks. Today you need almost twelve-and-a-half Rands to buy one single US Dollar (though by the time you read this that exchange rate will be different), and that US Dollar is a meagre shadow of its former self. Its value has been eroded by something like 87% since Nixon abandoned the Gold Standard back in '71. I still remember banknotes that had printed upon them, "I promise to pay the bearer on demand." I occasionally wonder how many people actually pitched up at the Reserve Bank in Pretoria to ask for their One Rand in Gold in exchange for that bit of paper. I don't suppose there were many, because of course there was a Catch. The best Catch that ever was. Catch 22. It was illegal to own Gold metal, so any exchange of paper money for the Real Thing could only result in your immediate detention for hoarding Gold. Standard M.O. for sovereigns, really...
The thing is that everything (prices, wages, relative currency values, and so on) was pretty stable back then. At least major governments tried very, very hard to make it appear so, despite many people harbouring serious doubts that it was true. So a lot of people have come to believe that ditching the Gold Standard was a Bad Idea because it resulted in a bunch of volatility in the prices and monetary valuations of all things, a lot of fluctuation and a WHOLE LOT of inflation eroding the value of the very thing we use to value our time, goods and services. Money itself. These people have become convinced that if only we returned to using Gold as the basis for valuing our money, many of the evils of the world would be corrected. Stable money would be the Big Fix for everything that's wrong with the world.
And, of course, they're wrong. There never was such a thing. The stable value of Gold was simply a form of government-enforced price-fixing, an agreement by that cartel of sovereigns who were best placed to impose their will by dint of terrible weapons.
Fast forward to the present day where we find ourselves in the Alice in Wonderland world of neonatal crypto-assets, and one of the big complaints is that crypto-money is too volatile, the wild price fluctuation rendering it nearly unusable as actual currency for the purchase of goods and services. (Of course that volatility is the very thing we love when we go out to buy our new Lambo from the profits of our cryptocurrency speculation, but that's another conversation for another day.) What a lot of people claim to want is a StableCoin — a crypto-currency whose value changes very little, preferably not at all. Then when the price of Bitcoin takes a dive they could convert their stash into the StableCoin (further fuelling the fall in the price of Bitcoin as the mass selloff takes its course) and preserve the value of their money.
Of course we could just convert our Bitcoin back to fiat when the price takes a dive — most will call that "taking profit" or "taking risk off the table" — but that seems contrary to the whole Crypto Programme. It buggers with the notion that we ought to be taking control of money away from governments if, as soon as we run into headwinds, we run for refuge in the arms of Mommy and Daddy. So the idea is that we should make our own stable money as safe harbour from the Winds Of Adversity.
Ironically, when you look at this whole situation a little more critically, you'll notice that we keep blithering about the "price" of various assets — Bitcoins and so on — without considering what unit of account we're using to quantify that price. It's usually — almost always — the US Dollar, itself hardly a bastion of stability or safety. Anybody who has speculated in the Forex markets has surely profited or lost at the mercy of fluctuations in the relative value of the USD, not to mention all the other national fiat currencies.
Hell, even the value of Gold wobbles up and down at the whims of the marketplace, and those wobblings have also been the source of fortunes made and lost to the vagaries of the Hidden Hands.
The point is that it's all relative.
So what EXACTLY does it mean to create a StableCoin that is pegged to the US Dollar or backed by Gold when the values of those underlying assets wax and wane in imitation of near-perfect random noise? What does it boot us when one DAI — by exceedingly clever and cunning means — maintains its value via Smart Contract at Pretty Close To One Dollar when the very value of that Dollar wanders drunkenly up and down in response to random draughts of hot air?
Initially I was all for having StableCoins, but over time I've become rather sceptical about their long-term value and usefulness. I do think they are a good thing in the short to medium term, while the rest of the crypto-money space gets its shit sorted out. StableCoins can and will provide a useful place to store value in the face of speculator-driven price changes to other crypto-assets, and they serve as a useful way to say to a sceptical world, "Look! See! Crypto-assets need NOT be unruly wild beasts. They CAN be tamed!"
But over the longer run I am a bit more dubious about their place in the ecosystem. Over time, as institutional investors increase their shadowy presence in the crypto markets, as Central Banks begin to participate in the flows of crypto-money, and as financial regulators and tax authorities develop the tools to impose a degree of order on the space, we should expect volatility to decrease. Lambos will, once more, become less easily attainable, but the true value and purpose of crypto-money will be more closely realised and we'll find it easier and easier to get paid with crypto-money, to pay our rent with it, use LTC to buy groceries, BCH for Fiats, XLM for plane tickets, and so on.
And then, as the "prices" of crypto-assets become less wild, we can become more comfortable with letting those valuations fluctuate against the assets we use as units of account — the thing that gives some meaning to the numbers we assign to any asset to tell us its value. In turn, quite likely, we will start to use these crypto-currencies as units of account themselves. We'll start to quote the price of Euros in ETH and the price of Rands in DSH. Just as the EUR/USD exchange rate wobbles up and down from one trade to the next, so will the XAU/BTC and the OIL/XMR, and we'll become more comfortable with that.
So, for a little while at least, StableCoins seem to have a useful niche to fill in the crypto ecosystems, but in the long run?

I am not at all sure they make a shred of sense.

14 April 2018

The Futures of Bitcoin

(Originally published at https://medium.com/@mikro2nd/the-futures-of-bitcoin-eb226927cb94 on 30/3/2018)


Leaving aside all the hype and hyperventilating, the personalities and poison, the shills and snake-oil, what might be The Future of Bitcoin?
First a quick and cursory glance at its past — just to give a little context. The original Bitcoin paper starts out forthrightly:
"A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution."
Satoshi’s stated intent was to implement a trust-free system of payments. So far Bitcoin has failed to be this. Instead it has become primarily a vehicle for speculation. Yes, a few hardy evangelists do trade using BTC, but they’re few and far between. Certainly every time I have asked someone to pay me using Bitcoin I was met with something between blank incomprehension and outright hostility.
The reasons are myriad and intertwined, and not particularly interesting to me here and now, save that it helps us to place where Bitcoin is in the landscape of status-quo-challenging innovations. At present — and this implies directly that things may well change in the future, perhaps even the quite near future — at present Bitcoin largely fails to serve as a Medium Of Exchange.
Generally we want a currency to provide, in some measure, the following three functions:
  • Medium of Exchange — a means to facilitate the barter of goods and services while eliminating the disadvantages of direct barter,
  • Store of Value — a way to hoard our wealth while we wait for something to spend it on, and
  • Unit of Account — a measure of how much value we’re storing or exchanging.
Bitcoin also fails as a Store Of Value due to its wild volatility — the very attribute that speculators love so much.
And I certainly know of nobody who uses Bitcoin — or any other cryptocurrency — as a Unit Of Account.
It is these latter two failings that drive the first. If I were a merchant, pricing goods in BTC would be problematic. Adoption is not wide enough that I could in turn pay my suppliers, my landlord or my taxes in BTC, so I am tethered to the fiat world, no matter how firmly I may be a crypto-future true believer. Even if I advertise pricing and accept payment in BTC, the real price of my wares is constantly referenced back to fiat — the BTC/fiat exchange rate. And that, as we've observed, fluctuates wildly — that damned volatility at work. I suspect this is the main reason we've seen a number of vendors exit from BTC pricing and payment rails. Indeed the volatility is such that you'd have to reprice on a minute-by-minute basis, and even then, if a transaction takes more than a few minutes to be confirmed, as happens during periods when the Bitcoin network is congested, you're unlikely to receive the same value you invoiced.
Even assuming I do sell some stuff and get paid for it in BTC, there is every reason to believe that the value of BTC I hold will bear little relation at all to the value I exchanged when it comes time for me to spend those BTC. So, because it is not a very stable way for me to store wealth, I am less inclined to accept Bitcoin in exchange for the value I sell. Catch 22.
Being “not a good store of value” discourages all but the most ideologically-committed vendors from adopting Bitcoin. Low adoption means that the pool of Bitcoin-enabled trade partners ends up being a very small pond indeed. And a lack of trade partners diminishes the usefulness and usability of the currency. In econospeak, there is a lack of liquidity in the Bitcoin economy, resulting in thin value underpinning the coin, and because there is a low volume of trade using the currency, even relatively small exchanges of Bitcoin can significantly alter its perceived value. Small transactions causing large changes in value is the very definition of volatility, and it is hurting Bitcoin adoption badly.
But are we stuck in this vicious cycle forever? I doubt it.
I can see three possible futures for Bitcoin:

Future 1: Brave New Coin

The technical difficulties with Bitcoin get solved, and hopefully quite soon; otherwise other projects are likely to close the still-open window of opportunity that Bitcoin has due to its primacy as the First Comer.
The problems are primarily:
  • fluctuating and sometimes excessive transaction costs — ideally users should never be confronted with the question of transaction costs at all,
  • unacceptable transaction confirmation times, and
  • abysmal user-interfaces that make transactions error-prone and needlessly difficult.
Solve these and there’s a very good chance that Bitcoin finally begins to take off as a Medium Of Exchange.
Solve these and Bitcoin stands by far the best chance of occupying the core (though certainly not all) of the Electronic Cash space, simply because of brand awareness, primacy and market dominance. And we can thank the “bubble” of late 2017 for much of that…

Future 2: Going Gently Into The Night

The technical difficulties don’t get solved, the Bitcoin identity gets fragmented by all the forks, things remain messy and in the meanwhile some other, newer-generation coin quietly and steadily gains acceptance as a means of payment. Litecoin? Zcash? Monero? And with a dreadful inevitability, Bitcoin’s dominance slowly wanes into irrelevance and ultimate extinction.
This might be the best outcome — not for Bitcoin or its adherents and believers, but for society at large. Let’s at least acknowledge that Bitcoin is the zeroth-generation of cryptographically-enabled distributed ledger (with all the good things that arise from that). But seldom is Version 0, the Proof-of-Concept, the best solution. Usually it takes us a few iterations to get something right. Just look at the evolution of conventional money for an instructive example!

Future 3: The Gold Standard

The last possibility is that the technical difficulties don’t get solved, Bitcoin never becomes a mass Medium Of Exchange, but instead becomes the internet’s primary Store Of Value: Crypto-Gold, in other words. This is a world in which transaction costs and confirmation times don’t matter. After all, look how much hassle and friction is involved in trading, moving and storing a tonne of physical, real-world Gold!
Bitcoin-as-Gold could easily happen. All it takes is for one or two of the world’s Central Banks to start openly using Bitcoin as part of their toolchest in hedging challenges to their national fiat. Using it as a tool in conducting their core business, in other words. Indeed, I would be surprised if any Central Bank in the world has failed to dabble in Bitcoin at this point, but so far they have only been sticking a toe in the water as a way of understanding this mysterious beast, and we have yet to see any Central Bank openly commit to using Bitcoin as a strategic vehicle. I am not speaking here of those few Central Banks that are implementing their own in-house crypto-currencies. Those are not true crypto-currencies, though they may derive some strength and advantage from being transacted on an open, unpermissioned and (hopefully) immutable distributed store. No. If it’s issued by a single Authority, then it’s fiat, not crypto-currency, whether the authority is a Central Bank, an airline, or a startup issuing dodgy tokens.
If/when Bitcoin starts getting openly used as a hedging instrument by central banks, I would bet on four things following really quickly:
  1. All — or almost all — other Central Banks follow suit.
  2. The price of Bitcoin rises enormously. A million dollars per BTC? Who can say.
  3. Bitcoin’s volatility evaporates overnight, driving the speculators (mostly) out of the market (though not until after they take profit, of course).
  4. Bitcoin mining gains a new and very substantial set of players — the Central Banks and the BIS, of course, because these suddenly have an asset and transaction records to protect.
I part ways with a number of Bitcoin’s True Believers in thinking that Bitcoin-as-Gold is not the worst outcome in the world. Yes, it departs from Satoshi’s Original Vision, but… for many, many reasons, the world needs safe and reliable stores of value.

Wake Up: Time to Choose

What’s it to be? Cash? Gold? Or oblivion and a short paragraph in the history books? Refusing to choose is a choice, too.

23 December 2017

LibreOffice, Linux, Nvidia and OpenGL - A Combination from Hell.

A quick note in the hope that it may save somebody else a little time, stress and trouble.

If you're a LibreOffice user, and you run it under one or other GNU/Linux distribution, and you use the Nvidia Graphics Drivers (rather than some default generic driver), you might be tempted to enable OpenGL rendering for LibreOffice.


For reasons that remain largely occult to me, this particular combination of circumstances causes LibreOffice to crash during startup. (For the C/C++ programmer who might care: It seems that there is a method pointer in the Nvidia driver that is null, so when LibreOffice calls this method during startup,... crashity, crashity, segfault, whump.)

The solution is simple: don't enable OpenGL rendering in LibreOffice. The option is there in among the LibreOffice options: Tools >> Options >> View >> Graphics Output >> Use OpenGL. Leave it well alone.

But what if you have already done that and your LibreOffice instance now refuses to run?

I guess there is some configuration file among LibreOffice's many (to be found in $HOME/.config/libreoffice and depths below) that would allow one to delve in with one's favourite text-editor (i.e. anything but nano) and fix it. I could not find it. In the end I simply removed the entire LibreOffice configuration tree and let it create a new one on the next startup (which was entirely and predictably successful, don'cherknow). After all, I tend to keep customisation as lightweight as I can, so it only takes a couple of minutes to put things back the way I like them, and not much harm done. It takes a fair while to discover all this, though...
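If you want to script that reset, a minimal shell sketch follows. The profile path is the standard one on Linux ($HOME/.config/libreoffice), but check your distribution; and moving the tree aside is gentler than deleting it outright, in case you want to salvage anything later.

```shell
# Move LibreOffice's user profile aside; a fresh one is generated on next start.
# Override CONF_DIR if your distribution keeps the profile elsewhere.
CONF_DIR="${CONF_DIR:-$HOME/.config/libreoffice}"

if [ -d "$CONF_DIR" ]; then
    mv "$CONF_DIR" "$CONF_DIR.bak"
    echo "Profile moved to $CONF_DIR.bak; start LibreOffice to rebuild it."
else
    echo "No profile found at $CONF_DIR; nothing to do."
fi
```

If the fresh profile cures the crash, delete the .bak directory at your leisure; if not, move it back and keep digging.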

Hope this helps someone out there. If you found a better way to fix this issue, I'd love to hear about it!

BTW: This seems to be quite independent of Linux distribution. The most help I found in searching for a fix came from the Mint and Arch communities. I use Kubuntu.

02 January 2017

Viewing the Ad-Driven Internet as a Commons

Here's a thought... 

The Ad-revenue-driven Internet is (yet another example of a) Tragedy of the Commons

The ad-funded website derives some (small, perhaps, but measurable) benefit from placing ads, drives clicks through them with least-cost clickbait, and makes some money. The Commons of the Internet, "We, the Readers", carries the cost. Not only the cost of our attention, the time of our lives, but literally the cost of delivery: we pay for the bandwidth and infrastructure needed to get those ads in front of us. So the beneficiaries of this scheme, the ad-funded websites, the Facebooks and Googles and Twitters and Instagrams, are simply more examples of Exploiters of The Commons, driven to maximise the exploitation in ever-increasing ways (because "if we don't catch those fish, someone else will, so we'd best get there first and fish them the hardest").

BUT. We all know what happens in every other Race To Eat The Commons... Sooner or later the Commons collapses. The fish get fished out; the grassy pasture becomes the Sahara, the air becomes unbreathable.

It's not the ad-blockers those sites have to fear. Ad-blockers are clearly just a form of immune response, just like the fish that keep getting smaller and smaller under fishing pressure. The sites should bless and welcome the rise of the ad-blockers, because the nett effect of those is merely to prolong the life of the Commons.

No, what the ad-revenue sites ought to fear is the ultimate and inevitable Collapse of the Commons. It is hard to see what form that collapse will take, or when. All we can confidently predict is that the end of the ad-driven Internet model is a certainty.

It can't come soon enough.

18 August 2016

A 14-Point Framework for Evaluating Programming Libraries & APIs (Part 2 - Final)

In Part 1 of this write-up, I discussed some of the reasons we might have for developing a clear-cut framework
for measuring and evaluating programming APIs, and went on to identify the first 7 of the total 14 dimensions useful
in classifying and comparing different, but "competing", APIs. Here are the remainder...

8. Leverage

The 80/20 rule - can the most commonly-used 20% of a library do 80% of what we'll ever need it to do? (Gson is a great example of getting this right.) Chances are good that we might never need the remaining 20%, but it is good to know that it's there come the day that we hit some corner case where it becomes necessary.

9. Discretion

How well hidden are those implementation details that we should not be concerned with? Is it obvious (or even apparent) which stuff is meant to live on the surface of the API (i.e. is part of its UI, and intended for our use) versus the stuff that we're not meant to mess with? This relates a bit to the Opacity of an API, but deserves to be considered separately. Well-structured APIs will have explicit, well-advertised points whereby we can customise or extend the behaviour of the API to cater for our own peculiar corner cases, without having to open the Pandora's Box that is its inner workings. (Some programming languages and environments make this easy, some make it difficult or impossible. This is a context you must take into account when evaluating an API's discretion.) If (environment permitting) we are constantly having pure implementation detail thrown in our faces, it becomes much more difficult for us to sort out the stuff we need to know from the stuff we don't need to know, or worse, the stuff we really should not mess with.

10. Documentation

How good is the documentation? Is it up to date, or is it describing some historic version of which a whole lot no longer applies? (Volley!) Open source may count as documentation, but then make damn sure you can actually read and navigate that source (see Couchbase as a counter-example!). We seldom have the time to dredge through some open-source project's source (of questionable quality) to figure out what it should be doing or how we should be using it. A pointer to some examples buried deep in a library's source tree, lacking any form of comment or documentation (hello, Bouncy Castle!), is no substitute for adequate documentation.
While we're talking about documentation, beware of the simple Hello World Tutorial! All too often tutorial material is so trivially simple that it is effectively useless for communicating the intent and use of an API. As a showcase for features, these tutorials are a long-winded way to achieve nothing that couldn't be told in a short bullet-list, and they almost universally make the terrible error of dragging in irrelevant and distracting features simply as a way to show off, as opposed to providing instruction.

11. Support

StackOverflow is not support. How responsive are the devs (or support staff, if it's a closed, proprietary API) in the various fora, mailing lists, etc.? The availability of paid support is no guarantee of quality. Anybody who has spent an hour listening to telephone muzak at international call rates knows what I'm talking about.

12. Churn

How quickly are new releases made; do they frequently make compatibility-breaking changes?
Some reasonable update frequency is, of course, a good thing. Usually. It means that bugs are getting fixed, performance enhanced and new approaches embraced. At the other end of the spectrum, changes can come too often, and we become code-followers, on an endless treadmill of adaptive changes in our own code. (Hello, Android!)
A word of caution, though: It is not easy to distinguish between an API that is moribund and one that has simply reached a level of maturity that near-eliminates the need for updates. Too many projects fall into a trap of creating updates for the sake of seeming active and healthy, when, really, all they're doing is following a fashion industry.

13. Power-to-Weight Ratio

How much heavy lifting does the API do relative to how hard it is to learn, use, maintain?
Sometimes a library might be pretty difficult to learn because it requires us to learn whole new vocabularies, whole new ways of thinking about the world. Is it worth it? Sometimes the answer will be a clear YES, sometimes a NO, but mostly it's somewhere in between. If the problem it solves is a pretty trivial one, then it becomes much easier for us to evaluate Power-to-Weight, and we are more likely to demand that the API be correspondingly trivial to use. At the other extreme, a library might solve a pretty hard problem (synchronising data among multiple devices; distributed pretty-much-anything algorithms; concurrency) and so be worth investing significant effort in learning to use.

14. Entanglement

How many sub-dependencies does the library pull in? Does it stand alone, or are you in for a "Maven-downloading-the-entire-Internet" priming-build?
How much of a problem this is for you depends on many, many factors. The most pernicious thing that can happen is that transitive dependencies drag in incompatible (or, at least, different) notions for the same things. I recently saw this in a project where one library pulled in one way of doing JSON marshalling and unmarshalling, a second library pulled in a different subordinate library for doing the same thing, and my own preference was for yet a third library (which had actually been pulled in well before the other two, so I already had plenty of code using it — changing that would have been a pain!) We ended up with three slightly different JsonObject classes, all slightly incompatible. Ugh.

In Conclusion

In evaluating a bunch of libraries or frameworks that claim to solve a particular set of problems you might be facing, it helps to separate out the various elements that make them more or less suited to your circumstance. These elements (or dimensions of measure) may or may not carry similar weight in different situations. You might have some use for applying an arbitrary numerical scale (1-5, Fibonacci series, ...) to each dimension and assigning scores to each API under consideration. Or you might be content with a fuzzier gut-feel judgement. Spider diagrams might be useful. Spreadsheets, too. Some of the dimensions I consider important might not be for you, and that's OK. The important thing is to evaluate our tool-sets dispassionately and with some set of metrics to guide us. I hope you find these useful.

My thanks to friends and colleagues at Polymorph Systems for review and helpful suggestions. Mistakes and idiocies remain all my own.

15 August 2016

A 14-Point Framework for Evaluating Programming Libraries & APIs (Part 1)

Libraries, network-services, virtual machines, platforms and frameworks all qualify under the umbrella term "API". Some are simply things to be lived with: if we develop Windows applications, or write Android or iOS applications, then we're blessed or cursed with certain platform-level givens, and there's not a great deal we can do about them apart from, perhaps, wrapping them behind a facade layer that feels a bit "nicer" — one that makes life a little easier for us as developers by providing abstractions that more closely match those defined in our own applications, and hiding layers of complexity that must necessarily be handled but are uninteresting or distracting from the goals of our own development.
Aside from those "givens" we are faced (almost daily) with choosing other utility libraries and services to make our own development faster, simpler, more reliable, more performant and less repetitive.
This, then, brings us to the heart of my topic: What exactly is it that makes one programming interface "nicer" than another? What makes one library "better" than another? Is it the expressive power? How would we go about measuring that? Is it how quickly we can churn out useful code, working correctly? Do popularity and coolness matter? There has to be a better way to measure — if only in a fuzzy and inexact way — whether one library or REST service is better suited to our needs and wants than another, and whether using a particular library might be better or worse than writing our own.
In case it is not already clear, let me emphasize: There is no One True Best API for any given task. Every problem lives in a context — a set of forces pushing and pulling on the boundaries of the solution space, warping the texture of the implementation landscape. Costs, time, expertise and past experience, functional requirements, timing and reliability constraints and, not least, developers' penchant for playing with the newest, shiny technologies — their desire to learn and extend their mastery. So: each and every API we choose to employ (as opposed to those that are forced upon us whether we will or no) must be tested against the problems we are trying to solve and the constraints and forces acting upon us and our application. A particular library may be the "right" answer for one project, but be quite inappropriate for the next one. Our desire for "good architecture"[1] suggests that we should, at least, make those choices consciously and deliberately rather than blindly or reflexively.
In what follows, I pick apart the various dimensions we might use in evaluating competing APIs, identifying 14 that you might want to consider as evaluation metrics in choosing (or avoiding) an API.
I should emphasize that I consider APIs (along with programming languages, platforms and codebase-hygiene) as primarily a UX problem. These things are all first and foremost user-interfaces for us, as tool-manufacturing humans to use, misuse or abuse. The principal question is, "How likely is it that this tool (API) will lead us astray and into the murky swamp of technical-despair?" versus "To what degree will this tool allow us to write less code, more reliable code, more readable (comprehensible, therefore maintainable) code?"
[1] I refrain from trying to nail down just what constitutes "good architecture" and rely, here, on your own intuition and experience. Suffice to say that it extends well beyond the merely technical concerns and encompasses the human, social and business spheres, too.

1. Surface Area

How many types, methods, configuration items do you have to learn in order to use this thing?
This is not unrelated to the ideas of Function Points as a way to "size" software — it attempts to measure the number of inputs and outputs (since that's what types and configuration items are) and use-cases for those moving parts. The absolute number is not important, since different APIs address problems of many sizes and a wide range of complexity, but it can be useful in comparing APIs that purport to solve the same or similar problem-spaces.
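As a crude, language-specific illustration of the idea: in Java you can get a first-order feel for a library's surface area just by counting the public methods reflection reports. The classes compared below are arbitrary examples, and the count is only a proxy; it says nothing about how complex each method is.

```java
import java.util.Arrays;

public class SurfaceArea {
    // Crude surface-area proxy: the number of public methods a caller could invoke.
    // (Ignores fields, nested types and configuration, so treat it as a lower bound.)
    static long surface(Class<?> api) {
        return Arrays.stream(api.getMethods())
                     .filter(m -> m.getDeclaringClass() != Object.class) // drop Object noise
                     .count();
    }

    public static void main(String[] args) {
        // Arbitrary example classes; compare like with like when doing this for real.
        System.out.println("List surface:      " + surface(java.util.List.class));
        System.out.println("ArrayList surface: " + surface(java.util.ArrayList.class));
    }
}
```

The absolute numbers vary by JDK version, which is exactly why this measure is only useful for comparing APIs that claim to solve the same problem.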

2. Coverage

How much of the topic-area does an API address? Is that what you need?

Does the API do all that you expect it to do? Does it do way more than you need? If the API is functionally incomplete, you will find yourself writing supplementary code to make up its deficiencies. That may be acceptable, but it may spell trouble if the API in question is supposed to be solving some complex or difficult problem (e.g. crypto) but is not sufficiently complete.

The case where an API covers way more territory than we really need is a little more subtle. Given appropriate tooling (not always available in all toolchains or environments) this is not primarily a technical problem (of linking too much object code into an application, resulting in codebase bloat) but a cognition problem. Every part of an API wants to put little hooks into our brains. They call out to us, crying, "Me, me! Pay attention to me!" and we truly cannot afford to give them that time or mental space.

Take the Google Guava library (for Java). I use it on almost every Java project I am part of, but I only make really heavy use of maybe two chunks of what it does — the Preconditions and some annotations (for adding contract-like guarantees to classes), and the Collections (particularly the Immutable collections) classes. The rest of the library is mostly surplus cognitive baggage most of the time. I'd be better off with it living in a separate library to be pulled in only when truly needed. Indeed, I have seen projects that ended up with as many as three separate definitions of methods like isNullOrEmpty(aString) and checkNotNull(anObject), simply because developers did not want to pull in all of Guava in the early stages of their project, then acquired another instance of those methods because some other third-party library defined them, and, at the end of the day, ended up using Guava anyway for other reasons. What a mess.
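The duplication described above looks like this in miniature: hand-rolled helpers that replicate what Guava's `Strings.isNullOrEmpty` and `Preconditions.checkNotNull` already provide, written because nobody wanted to pull in the whole library yet:

```java
// Hand-rolled copies of utilities Guava already provides.
// Projects that defer adopting the library often accumulate
// several independent versions of exactly these two methods.
public class StringUtil {
    static boolean isNullOrEmpty(String s) {
        return s == null || s.isEmpty();
    }

    static <T> T checkNotNull(T reference) {
        if (reference == null) {
            throw new NullPointerException();
        }
        return reference;
    }

    public static void main(String[] args) {
        System.out.println(isNullOrEmpty(""));       // true
        System.out.println(isNullOrEmpty("guava"));  // false
        System.out.println(checkNotNull("ok"));      // ok
    }
}
```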

3. Composability

How well does this API play with other libraries and tools?
If an API works in terms of platform-compatible types, it is much more likely to play well with other APIs. If it insists upon introducing and using only its own types, it will be much more difficult to force it to play well with the other libraries in our armoury — we are sure to find ourselves writing endless boilerplate code converting between custom datatypes. And unit tests for that code. Or not, thus hurting our code-coverage metrics and breeding despondency and discouragement, because clearly we suck at doing The Right Stuff.
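A hypothetical sketch of the conversion boilerplate in question. `VendorList` stands in for any third-party API that refuses to speak `java.util.List`; the `toPlatform` shim (and its test) is the tax we pay at every boundary:

```java
import java.util.ArrayList;
import java.util.List;

public class Composability {
    // Stand-in for a third-party API's custom collection type,
    // declared where a java.util.List would have composed for free.
    static class VendorList {
        private final List<String> items = new ArrayList<>();
        void add(String s) { items.add(s); }
        int size() { return items.size(); }
        String get(int i) { return items.get(i); }
    }

    // The boilerplate we end up writing (and unit-testing) at each seam.
    static List<String> toPlatform(VendorList vendor) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < vendor.size(); i++) {
            out.add(vendor.get(i));
        }
        return out;
    }

    public static void main(String[] args) {
        VendorList v = new VendorList();
        v.add("a");
        v.add("b");
        System.out.println(toPlatform(v)); // [a, b]
    }
}
```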

4. Modularity

How easily can we break this library into pieces so that we can use just the bits we need?
This is (again) about reducing cognitive load. Does a library allow us to just pick and choose the bits that suit us well, leaving the remainder strictly alone, or does it force us to schlepp in all sorts of sundry other parts of the library that do not touch on the problem we're solving? Some frameworks tend to be really bad at this.

5. Openness

How much is this API a black-box?
Can you tweak the under-the-hood stuff when you need to, without delving into the twisty, slippery innards of the implementation? This is simply the Open-Closed Principle in its essence.
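One common way a library stays open for extension while closed for modification is to expose a deliberate seam — a hook or strategy the caller can swap. A minimal sketch, with names of my own invention:

```java
import java.util.function.UnaryOperator;

// A library class that exposes a hook (the decorator) so callers can
// tweak behaviour without touching the implementation's innards.
public class Greeter {
    private final UnaryOperator<String> decorator;

    Greeter() {
        this(UnaryOperator.identity());
    }

    Greeter(UnaryOperator<String> decorator) {
        this.decorator = decorator;
    }

    String greet(String name) {
        return decorator.apply("Hello, " + name);
    }

    public static void main(String[] args) {
        System.out.println(new Greeter().greet("world"));
        // Extension without modification: plug in new behaviour.
        System.out.println(new Greeter(String::toUpperCase).greet("world"));
    }
}
```

An API with no such seams forces us into forking or reflection hacks the moment our needs deviate from the author's assumptions.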

6. Opacity

How well does an API hide the details and complexities of the problem-space? A well-thought-out API will shield us (to an appropriate degree) from the concepts and particularities of the underlying domain it deals with, allowing us to work with concepts that ought to be much closer to our own, more familiar application domain. The types and operations exposed at the surface of the API should reflect something more amenable to adaptation to our own conceptual framework than the underlying problem that it hides and manages.

If a library is not making stuff simpler for us, why bother using it? Does it provide a facade that makes sense in the context of the problem your application is attempting to solve?
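The facade idea can be sketched in a few lines. Here the application talks in its own terms ("save a note") while the grubby storage details (escaping, record framing) stay hidden behind the surface types; everything here is an invented illustration, not any particular library's API:

```java
// A facade whose surface speaks the application's domain ("notes"),
// hiding the low-level record-framing details of the backing store.
public class NoteStore {
    private final StringBuilder backingStore = new StringBuilder();

    // The caller never sees separators or escaping rules.
    void save(String title, String body) {
        backingStore.append(escape(title)).append('\u0000')
                    .append(escape(body)).append('\n');
    }

    int count() {
        return (int) backingStore.chars().filter(c -> c == '\n').count();
    }

    // Plumbing detail the facade exists to hide.
    private static String escape(String s) {
        return s.replace("\u0000", " ").replace("\n", " ");
    }

    public static void main(String[] args) {
        NoteStore store = new NoteStore();
        store.save("todo", "buy milk");
        store.save("idea", "write blog\npost");
        System.out.println(store.count()); // 2
    }
}
```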

7. Accessibility

Can you learn just a little bit of the library and be useful (Vertx), or do you need to learn the whole damn thing before you can (safely) use any of it? (Git)

Accessibility is one of the more important dimensions for thinking about APIs because it means we can tackle the (sometimes daunting) task of learning to use a library truly effectively in little bites, and each little bite that we can chew and swallow gives us an ever-increasing confidence in the library, and an ever-increasing confidence in our own abilities to put it to good use.

I shall continue with the remaining seven dimensions in a follow-up post in a couple of days. This thing is already too long for Internet attention-spans.