14 April 2018

The Futures of Bitcoin

(Originally published at https://medium.com/@mikro2nd/the-futures-of-bitcoin-eb226927cb94 on 30/3/2018)

Leaving aside all the hype and hyperventilating, the personalities and poison, the shills and snake-oil, what might be The Future of Bitcoin?
First a quick and cursory glance at its past — just to give a little context. The original Bitcoin paper starts out forthrightly:
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.
Satoshi’s stated intent was to implement a trust-free system of payments. So far Bitcoin has failed to be this. Instead it has become primarily a vehicle for speculation. Yes, a few hardy evangelists do trade using BTC, but they’re few and far between. Certainly, every time I have asked someone to pay me using Bitcoin, I have been met with something between blank incomprehension and outright hostility.
The reasons are myriad and intertwined, and not particularly interesting to me here and now, save that it helps us to place where Bitcoin is in the landscape of status-quo-challenging innovations. At present — and this implies directly that things may well change in the future, perhaps even the quite near future — at present Bitcoin largely fails to serve as a Medium Of Exchange.
Generally we want a currency to provide, in some measure, the following three functions:
  • Medium of Exchange — a means to facilitate the barter of goods and services while eliminating the disadvantages of direct barter,
  • Store of Value — a way to hoard our wealth while we wait for something to spend it on, and
  • Unit of Account — a measure of how much value we’re storing or exchanging.
Bitcoin also fails as a Store Of Value due to its wild volatility — the very attribute that speculators love so much.
And I certainly know of nobody who uses Bitcoin — or any other cryptocurrency — as a Unit Of Account.
It is these latter two failings that drive the first. If I were a merchant, pricing goods in BTC is problematic. Adoption is not wide enough that I can in turn pay my suppliers, my landlord or my taxes in BTC, so I am tethered to the fiat world, no matter how firmly I may be a crypto-future true believer. Even if I advertise pricing and accept payment in BTC, the real price of my wares is constantly referenced back to fiat — the BTC/fiat exchange rate. And that, as we’ve observed, fluctuates wildly — that damned volatility at work. I suspect this is the main reason we’ve seen a number of vendors exit from BTC pricing and payment rails. Indeed the volatility is such that you’d have to reprice on a minute-by-minute basis, and even then, if a transaction takes more than a second or two to be confirmed, as happens during periods when the Bitcoin network is congested, you’re unlikely to receive the same value you invoiced.
Even assuming I do sell some stuff and get paid for it in BTC, there is every reason to believe that the value of BTC I hold will bear little relation at all to the value I exchanged when it comes time for me to spend those BTC. So, because it is not a very stable way for me to store wealth, I am less inclined to accept Bitcoin in exchange for the value I sell. Catch 22.
Being “not a good store of value” discourages all but the most ideologically-committed vendors from adopting Bitcoin. Low adoption means that the pool of Bitcoin-enabled trade partners ends up being a very small pond indeed. And a lack of trade partners diminishes the usefulness and usability of the currency. In econospeak, there is a lack of liquidity in the Bitcoin economy, resulting in thin value underpinning the coin, and because there is a low volume of trade using the currency, even relatively small exchanges of Bitcoin can significantly alter its perceived value. Small transactions causing large changes in value is the very definition of volatility, and it is hurting Bitcoin adoption badly.
But are we stuck in this vicious cycle forever? I doubt it.
I can see three possible futures for Bitcoin:

Future 1: Brave New Coin

The technical difficulties with Bitcoin get solved, and hopefully quite soon; otherwise other projects are likely to close the still-open window of opportunity that Bitcoin enjoys through its primacy as the First Comer.
The problems are primarily:
  • fluctuating and sometimes excessive transaction costs — ideally users should never be confronted with the question of transaction costs at all,
  • unacceptable transaction confirmation times, and
  • abysmal user-interfaces that make transactions error-prone and needlessly difficult.
Solve these and there’s a very good chance that Bitcoin finally begins to take off as a Medium Of Exchange.
Solve these and Bitcoin stands by far the best chance of occupying the core (though certainly not all) of the Electronic Cash space, simply because of brand awareness, primacy and market dominance. And we can thank the “bubble” of late 2017 for much of that…

Future 2: Going Gently Into The Night

The technical difficulties don’t get solved, the Bitcoin identity gets fragmented by all the forks, things remain messy, and in the meanwhile some other, newer-generation coin quietly and steadily gains acceptance as a means of payment. Litecoin? Zcash? Monero? And with a dreadful inevitability, Bitcoin’s dominance slowly wanes into irrelevance and ultimate extinction.
This might be the best outcome — not for Bitcoin or its adherents and believers, but for society at large. Let’s at least acknowledge that Bitcoin is the zeroth-generation of cryptographically-enabled distributed ledger (with all the good things that arise from that). But seldom is Version 0, the Proof-of-Concept, the best solution. Usually it takes us a few iterations to get something right. Just look at the evolution of conventional money for an instructive example!

Future 3: The Gold Standard

The last possibility is that the technical difficulties don’t get solved, Bitcoin never becomes a mass Medium Of Exchange, but instead becomes the internet’s primary Store Of Value: Crypto-Gold, in other words. This is a world in which transaction costs and confirmation times don’t matter. After all, look how much hassle and friction is involved in trading, moving and storing a tonne of physical, real-world Gold!
Bitcoin-as-Gold could easily happen. All it takes is for one or two of the world’s Central Banks to start openly using Bitcoin as part of their toolchest in hedging challenges to their national fiat. Using it as a tool in conducting their core business, in other words. Indeed, I would be surprised if any Central Bank in the world has failed to dabble in Bitcoin at this point, but so far it has only been sticking a toe in the water as a way of understanding this mysterious beast, and we have yet to see any Central Bank openly commit to using Bitcoin as a strategic vehicle. I am not speaking here of those few Central Banks that are implementing their own in-house crypto-currencies. Those are not true crypto-currencies, though they may derive some strength and advantage from being transacted on an open, unpermissioned and (hopefully) immutable distributed store. No. If it’s issued by a single Authority, then it’s fiat, not crypto-currency, whether the authority is a Central Bank, an airline, or a startup issuing dodgy tokens.
If/when Bitcoin starts getting openly used as a hedging instrument by central banks I would bet on four things following really quickly:
  1. All — or almost all — other Central Banks will follow suit,
  2. The price of Bitcoin will rise enormously. A million dollars per BTC? Who can say.
  3. Bitcoin’s volatility will evaporate overnight, driving the speculators (mostly) out of the market (though not until after they take profit, of course), and
  4. Bitcoin mining will gain a new and very substantial set of players — the Central Banks and the BIS of course, because these suddenly have an asset and transaction records to protect.
I part ways with a number of Bitcoin’s True Believers in thinking that Bitcoin-as-Gold is not the worst outcome in the world. Yes, it departs from Satoshi’s Original Vision, but… for many, many reasons, the world needs safe and reliable stores of value.

Wake Up: Time to Choose

What’s it to be? Cash? Gold? Or oblivion and a short paragraph in the history books? Refusing to choose is a choice, too.

23 December 2017

LibreOffice, Linux, Nvidia and OpenGL - A Combination from Hell.

A quick note in the hope that it may save somebody else a little time, stress and trouble.

If you're a LibreOffice user, and you run it under one or other GNU/Linux distribution, and you use the Nvidia Graphics Drivers (rather than some default generic driver), you might be tempted to enable OpenGL rendering for LibreOffice.

DON'T DO THIS.

For reasons that remain largely occult to me, this particular combination of circumstances causes LibreOffice to crash during startup. (For the C/C++ programmer who might care: It seems that there is a method pointer in the Nvidia driver that is null, so when LibreOffice calls this method during startup,... crashity, crashity, segfault, whump.)

The solution is simple: don't enable OpenGL rendering in LibreOffice. The option is there among the LibreOffice options: Tools >> Options >> View >> Graphics Output >> Use OpenGL. Leave it well alone.

But what if you have already done that and your LibreOffice instance now refuses to run?

I guess there is some configuration file among LibreOffice's many (to be found in $HOME/.config/libreoffice and depths below) that would allow one to delve in with one's favourite text-editor (i.e. anything but nano) and fix it. I could not find it. In the end I simply removed the entire LibreOffice configuration tree and let it create a new one on the next startup (which was entirely and predictably successful, don'cherknow). After all, I tend to keep customisation as lightweight as I can, so it only takes a couple of minutes to put things back the way I like them, and not much harm done. It takes a fair while to discover all this, though...
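
For the record, the same reset spelled out as code. This is only a sketch of the rename-the-profile trick just described, and assumes the default profile location mentioned above; a plain shell rename of $HOME/.config/libreoffice does the identical job:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ResetLibreOfficeProfile {
        public static void main(String[] args) throws Exception {
            Path config = Paths.get(System.getProperty("user.home"),
                                    ".config", "libreoffice");
            if (Files.exists(config)) {
                // Rename rather than delete, so the old settings remain recoverable.
                Files.move(config, Paths.get(config + ".bak"));
            }
        }
    }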

Hope this helps someone out there. If you found a better way to fix this issue, I'd love to hear about it!

BTW: This seems to be quite independent of Linux distribution. The most help I found in searching for a fix came from the Mint and Arch communities. I use Kubuntu.

02 January 2017

Viewing the Ad-Driven Internet as a Commons

Here's a thought... 

The Ad-revenue-driven Internet is (yet another example of a) Tragedy of the Commons

The ad-funded website derives some (small, perhaps, but measurable) benefit from placing ads, drives clicks through them with least-cost clickbait, makes some money. The Commons of the Internet, "We, the Readers", carry the cost. Not only the cost of our attention, the time of our lives, but literally the cost of delivery; we pay for the bandwidth and infrastructure needed to get those ads in front of us. So the beneficiaries of this scheme, the ad-funded websites, the Facebooks and Googles and Twitters and Instagrams, are simply more examples of Exploiters of The Commons, driven to maximise the exploitation in ever-increasing ways (because "if we don't catch those fish, someone else will, so we'd best get there first and fish them the hardest.")

BUT. We all know what happens in every other Race To Eat The Commons... Sooner or later the Commons collapses. The fish get fished out; the grassy pasture becomes the Sahara, the air becomes unbreathable.

It's not the ad-blockers those sites have to fear. Ad-blockers are clearly just a form of immune response, just like the [fish] that keep getting smaller and smaller. They should bless and welcome the rise of the ad-blockers, because the nett effect of those is merely to prolong the life of the Commons.

No, what the ad-revenue sites ought to fear is the ultimate and inevitable Collapse of the Commons. It is hard to see what form that collapse is likely to take, and harder still to guess its timing. All we can confidently predict is that the end of the ad-driven Internet model is a certainty.


It can't come soon enough.

18 August 2016

A 14-Point Framework for Evaluating Programming Libraries & APIs (Part 2 - Final)

In Part 1 of this write-up, I discussed some of the reasons we might have for developing a clear-cut framework for measuring and evaluating programming APIs, and went on to identify the first 7 of the total 14 dimensions useful in classifying and comparing different, but "competing", APIs. Here are the remainder...

8. Leverage

The 80/20 rule - can the most commonly-used 20% of a library do 80% of what we'll ever need it to do? (Gson is a great example of getting this right.) Chances are good that we might never need the remaining 20%, but it is good to know that it's there come the day that we hit some corner case where it becomes necessary.
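
By way of illustration, here is roughly all of Gson that most projects ever touch. A minimal sketch; the Person type is my own invention, not anything from Gson's documentation:

    import com.google.gson.Gson;

    public class GsonLeverage {
        // Illustrative type; Gson needs no annotations for simple cases like this.
        static class Person {
            String name;
            int age;
        }

        public static void main(String[] args) {
            Gson gson = new Gson();
            Person p = new Person();
            p.name = "Ada";
            p.age = 36;
            String json = gson.toJson(p);                    // {"name":"Ada","age":36}
            Person back = gson.fromJson(json, Person.class); // and back again
            System.out.println(json + " -> " + back.name);
        }
    }

Two calls, toJson() and fromJson(), carry practically the whole load; the rest of the library sits quietly in reserve for the corner cases.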

9. Discretion

How well hidden are those implementation details that we should not be concerned with? Is it obvious (or even apparent) which stuff is meant to live on the surface of the API (i.e. is part of its UI, and intended for our use) versus the stuff that we're not meant to mess with? This relates a bit to the Opacity of an API, but deserves to be considered separately. Well structured APIs will have explicit, well-advertised points whereby we can customise or extend the behaviour of the API to cater for our own peculiar corner cases, without having to open the Pandora's Box that is its inner workings. (Some programming languages and environments make this easy, some make it difficult or impossible. This is a context you must take into account when evaluating an API's "discretion".) If (environment permitting) we are constantly having pure implementation thrown in our faces, it becomes much more difficult for us to sort out the stuff we need to know from the stuff we don't need to know, or worse, the stuff we really should not mess with.
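
To make "explicit, well-advertised points" concrete, here is a hypothetical sketch of such an extension point; the names are mine, not from any real library. One small, documented interface lives on the surface, and everything else stays internal:

    // Hypothetical: the one documented hook the library advertises for our
    // peculiar corner cases.
    public interface RetryPolicy {
        /** Delay in milliseconds before retry attempt n, or negative to give up. */
        long delayBeforeAttempt(int attempt);
    }

    // The library ships a sensible default; we never need to see its innards.
    final class ExponentialBackoff implements RetryPolicy {
        @Override
        public long delayBeforeAttempt(int attempt) {
            return Math.min(30_000L, 100L << attempt); // double each time, cap at 30s
        }
    }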

10. Documentation

How good is the documentation? Is it up-to-date, or is it describing some historic version to which a whole lot of stuff no longer applies? (Volley!) Open-source may count as documentation, but then make damn sure you can actually read/navigate that source (see Couchbase as a counter-example!) We seldom have the time to dredge through some open-source project's source (of questionable quality) to figure out what it should be doing or how we should be using it. A pointer to some examples buried deep in a library's source tree, and lacking any form of comment or documentation (hello Bouncy Castle!), is no substitute for adequate documentation.
While we're talking about documentation, beware of the simple Hello World Tutorial! All too often tutorial material is so trivially simple that it is effectively useless for communicating the intent and use of an API. As a showcase for features, these tutorials are a long-winded way to achieve nothing that can't be told in a short bullet-list, and they almost universally make the terrible error of dragging in irrelevant and distracting features simply as a way to show off, as opposed to provide instruction.

11. Support

StackOverflow is not support. How responsive are the devs (or support staff, if it's a closed, proprietary API) in the various fora, mailing lists, etc.? The availability of paid support is no guarantee of quality. Anybody who has spent an hour listening to telephone muzak at international call rates knows what I'm talking about.

12. Churn

How quickly are new releases made; do they frequently make compatibility-breaking changes?
Some reasonable update frequency is, of course, a good thing. Usually. It means that bugs are getting fixed, performance enhanced and new approaches embraced. At the other end of the spectrum, changes can come too often, and we become code-followers, on an endless treadmill of adaptive changes in our own code. (Hello Android!)
A word of caution, though: It is not easy to distinguish between an API that is moribund and one that has simply reached a level of maturity that near-eliminates the need for updates. Too many projects fall into a trap of creating updates for the sake of seeming active and healthy, when, really, all they're doing is following a fashion industry.

13. Power-to-Weight Ratio

How much heavy lifting does the API do relative to how hard it is to learn, use, maintain?
Sometimes a library might be pretty difficult to learn because it requires us to learn whole new vocabularies, whole new ways of thinking about the world. Is it worth it? Sometimes the answer will be a clear YES, sometimes a NO, but mostly somewhere in between. If the problem it solves is a pretty trivial one, then it becomes much easier for us to evaluate Power-to-Weight, and we are more likely to demand that the API be correspondingly trivial to use. At the other extreme, a library might solve a pretty hard problem (synchronising data among multiple devices; distributed pretty-much-anything algorithms; concurrency) and so be worth investing significant effort to learn how to use.

14. Entanglement

How many sub-dependencies does the library pull in? Does it stand alone, or are you in for a "Maven-downloading-the-entire-Internet" priming-build?
How much of a problem this is for you depends on many, many factors. The most pernicious thing that can happen is that transitive dependencies drag in incompatible (or, at least, different) notions for the same things. I recently saw this in a project where one library pulled in one way of doing JSON marshalling and unmarshalling, a second library pulled in a different subordinate library for doing the same thing, and my own preference was for yet a third library (which had actually been pulled in well before the other two, so I already had plenty of code using it — changing that would have been a pain!) We ended up with three slightly different JsonObject classes, all slightly incompatible. Ugh.

In Conclusion


In evaluating a bunch of libraries or frameworks that claim to solve a particular set of problems you might be facing, it helps to separate out the various elements that make them more or less suited to your circumstance. These elements (or dimensions of measure) may or may not carry similar weight in different situations. You might have some use for applying an arbitrary numerical scale (1-5, Fibonacci series, ...) to each dimension and assigning scores to each API under consideration. Or you might be content with a fuzzier gut-feel judgement. Spider diagrams might be useful. Spreadsheets, too. Some of the dimensions I consider important might not be for you, and that's OK. The important thing is to evaluate our tool-sets dispassionately and with some set of metrics to guide us. I hope you find these ones useful.

My thanks to friends and colleagues at Polymorph Systems for review and helpful suggestions. Mistakes and idiocies remain all my own.

15 August 2016

A 14-Point Framework for Evaluating Programming Libraries & APIs (Part 1)

Libraries, network-services, virtual machines, platforms and frameworks, all qualify under the umbrella term "API". Some are simply things to be lived with — if we develop Windows applications, if we write Android or iOS applications, then we're blessed or cursed with certain platform-level givens, and there's not a great deal we can do about them apart from, perhaps, wrapping them behind a facade layer that feels a bit "nicer" — that makes life a little easier for us as developers by providing abstractions that more closely match the abstractions defined in our own applications and hiding layers of complexity that must necessarily be handled, but are uninteresting or distracting from the goals for our own development.
Aside from those "givens" we are faced (almost daily) with choosing other utility libraries and services to make our own development faster, simpler, more reliable, more performant and less repetitive.
This, then, brings us to the heart of my topic: What exactly is it that makes one programming interface "nicer" than another? What makes one library "better" than another? Is it the expressive power? How would we go about measuring that? Is it how quickly we can churn out useful code, working correctly? Do popularity and coolness matter? There has to be a better way to measure — if only in a fuzzy and inexact way — whether one library or REST service is better suited to our needs and wants than another, and whether using a particular library might be better or worse than writing our own.
In case it is not already clear, let me emphasize: There is no One True Best API for any given task. Every problem lives in a context — a set of forces pushing and pulling on the boundaries of the solution space, warping the texture of the implementation landscape. Costs, time, expertise and past experience, functional requirements, timing and reliability constraints and, not least, developers' penchant for playing with the newest, shiny technologies — their desire to learn and extend their mastery. So: each and every API we choose to employ (as opposed to those that are forced upon us whether we will or no) must be tested against the problems we are trying to solve and the constraints and forces acting upon us and our application. A particular library may be the "right" answer for one project, but be quite inappropriate for the next one. Our desire for "good architecture"[1] suggests that we should, at least, make those choices consciously and deliberately rather than blindly or reflexively.
In what follows, I suggest some ways we might pick apart the dimensions we could use in evaluating competing APIs, identifying 14 of them that you might want to consider as evaluation metrics in choosing (or avoiding) an API.
I should emphasize that I consider APIs (along with programming languages, platforms and codebase-hygiene) as primarily a UX problem. These things are all first and foremost user-interfaces for us, as tool-manufacturing humans to use, misuse or abuse. The principal question is, "How likely is it that this tool (API) will lead us astray and into the murky swamp of technical-despair?" versus "To what degree will this tool allow us to write less code, more reliable code, more readable (comprehensible, therefore maintainable) code?"
[1] I refrain from trying to nail down just what constitutes "good architecture" and rely, here, on your own intuition and experience. Suffice to say that it extends well beyond the merely technical concerns and encompasses the human, social and business spheres, too.

1. Surface Area

How many types, methods, configuration items do you have to learn in order to use this thing?
This is not unrelated to the ideas of Function Points as a way to "size" software — it attempts to measure the number of inputs and outputs (since that's what types and configuration items are) and use-cases for those moving parts. The absolute number is not important, since different APIs address problems of many sizes and a wide range of complexity, but it can be useful in comparing APIs that purport to solve the same or similar problem-spaces.

2. Coverage

How much of the topic-area does an API address? Is that what you need?

Does the API do all that you expect it to do? Does it do way more than you need? If the API is functionally incomplete, you will find yourself writing supplementary code to make up its deficiencies. That may be acceptable, but it may spell trouble if the API in question is supposed to be solving some complex or difficult problem (e.g. crypto) but is not sufficiently complete.

The case where an API covers way more territory than we really need is a little more subtle. Given appropriate tooling (not always available in all toolchains or environments) this is not primarily a technical problem (of linking too much object code into an application codebase, resulting in codebase bloat) but a cognition problem. Every part of an API wants to put little hooks into our brains. They call out to us, crying, "Me, me. Pay attention to me!" and we truly cannot afford to give them that time or mental space. Take the Google Guava library (for Java.) I use it on almost every Java project I am part of. But I only make really heavy use of maybe two chunks of what it does — the Preconditions and some annotations (for adding contract-like guarantees to classes) and the Collections (particularly Immutable collections) classes. The rest of the library is mostly surplus cognitive baggage most of the time. I'd be better off with it being in a separate library to be pulled in only when truly needed. Indeed, I have seen projects that end up with as many as three separate definitions of methods like isNullOrEmpty(aString) and checkNotNull(anObject) simply because developers did not want to pull in all of the Guava library in the early stages of their project, then acquired another instance of those methods because some other third-party library made those definitions, and, at the end of the day, they ended up using the Guava library anyway for other reasons. What a mess.
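
For a feel of just how small that heavily-used subset is, a sketch follows. The Roster class is my own illustration, though the Guava calls are real:

    import com.google.common.base.Preconditions;
    import com.google.common.collect.ImmutableList;

    // Illustrative class: roughly the 20% of Guava described above.
    public final class Roster {
        private final ImmutableList<String> names;

        public Roster(Iterable<String> names) {
            // Fail fast, with a useful message, instead of a mysterious NPE later.
            Preconditions.checkNotNull(names, "names must not be null");
            // A defensive, genuinely immutable copy; no unmodifiable-wrapper games.
            this.names = ImmutableList.copyOf(names);
        }

        public ImmutableList<String> names() {
            return names;
        }
    }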

3. Composability

How well does this API play with other libraries and tools?
If an API works in terms of platform-compatible types, it is much more likely to play well with other APIs. If it insists upon introducing and using only its own types, it will be much more difficult for us to force it to play well with the other libraries in our armory — we are sure to find ourselves writing endless boilerplate code converting between custom datatypes. And unit tests for that code. Or not, thus hurting our code-coverage metrics and creating emotions of despondency and discouragement because clearly we suck at doing The Right Stuff.
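
Here is a hypothetical taste of that boilerplate, where LibTimestamp stands in for any library-only type; nothing below comes from a real API:

    import java.time.Instant;

    // Imagined library-only type, standing in for any API that shuns platform types.
    final class LibTimestamp {
        final long epochMillis;
        LibTimestamp(long epochMillis) { this.epochMillis = epochMillis; }
    }

    // The conversion shims we end up writing (and testing) at every boundary.
    final class LibTimestamps {
        static LibTimestamp fromInstant(Instant instant) {
            return new LibTimestamp(instant.toEpochMilli());
        }

        static Instant toInstant(LibTimestamp ts) {
            return Instant.ofEpochMilli(ts.epochMillis);
        }
    }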

4. Modularity

How easily can we break this library into pieces so that we can use just the bits we need?
This is (again) about reducing cognitive load. Does a library allow us to just pick and choose the bits that suit us well, leaving the remainder strictly alone, or does it force us to schlepp in all sorts of sundry other parts of the library that do not touch on the problem we're solving? Some frameworks tend to be really bad at this.

5. Openness

How much is this API a black-box?
Can you tweak the under-the-hood stuff if/when you need to, without delving into the twisty, slippery innards of the implementation? This is simply the Open-Closed Principle in its essence.

6. Opacity

How well does an API hide the details and complexities of the problem-space? A well-thought-out API will shield us (to an appropriate degree) from the concepts and particularities of the underlying domain it deals with, allowing us to work with concepts that ought to be much closer to our own, more familiar application domain. The types and operations exposed at the surface of the API should reflect something more amenable to adaptation to our own conceptual framework than the underlying problem that it hides and manages.

If a library is not making stuff simpler for us, why bother using it? Does it provide a facade that makes sense in the context of the problem your application is attempting to solve?

7. Accessibility

Can you learn just a little bit of the library and be useful (Vertx), or do you need to learn the whole damn thing before you can (safely) use any of it? (Git)

Accessibility is one of the more important dimensions for thinking about APIs because it means we can tackle the (sometimes daunting) task of learning to use a library truly effectively in little bites, and each little bite that we can chew and swallow gives us an ever-increasing confidence in the library, and an ever-increasing confidence in our own abilities to put it to good use.

I shall continue with the remaining seven dimensions in a follow-up post in a couple of days. This thing is already too long for Internet-attention-spans as it is.

03 March 2016

Extraordinary, Driven, Passionate, Imaginative, Remarkable, Phenomenal, Seasoned Developer Wanted. Apply within.

Recently spotted on the Internet:

Open Positions

  • Extraordinary DevOps Leader
  • Driven JavaScript / Node.js Developer
  • Remarkable PHP Engineer
  • Imaginative iOS Developer
  • Passionate UX/UI Designer
  • Seasoned Python Developer
(Really. I didn't make these up.)

I wondered if I'd qualify for any of those, if I have the necessary qualities, even supposing I possessed the requisite hard skills...

Extraordinary?

No, I don't think I am extraordinary. Most humans are not extraordinary. Most humans are pretty ordinary. That said, I've certainly done some unusual things. The most unusual was probably dropping out of my corporate programming and design job, with all its advantages of good income, city lifestyle and boringboringboringboring to live out in the sticks on a rural smallholding, trying to be self-sufficientish, growing my own food, supplying my own water, brewing my own beer, and learning. Always learning...

Driven?

Yes, I confess, I have been driven.

Back in late 1999 into 2000 I did a gig that involved me living and working in Switzerland for about six months, on and off. Not in Switzerland as much as in one particular Canton that has (I was told) specific IP treaties with various other parts of the world that were (I was told) essential to the success of The Venture.

I was accommodated in particularly upmarket lodgings -- an apartment in the same building that housed the offices and personal home-away-from-home of one of the Money Principals of the venture -- one of the 0.0001%. Lovely views of the lake, famous Alpine peaks visible in the distance, clouds permitting. I had some lingering contractual obligations back in Cape Town that required me to commute between Switzerland and SA every fortnight. Business Class, naturally, at the expense of The Venture. Man, I accumulated a lot of frequent-flier miles that way.

Each time I landed back in Zurich, The Venture's minions would arrange for a taxi to schlepp me from the airport to the office. And what a taxi. Not some grotty yellow clapper with slightly sticky seats and cigarette-infused upholstery, oh no! A Mercedes limo, all leather and walnut, and Herr Geissler's pidgin-English offering to take me via the scenic route as I reclined in luxury in the back watching through the black-tinted windows as the chocolate-box chalets went whooshing by. The trip from airport to office took around forty minutes, unless we took the scenic route.

One time, though, something went wrong, and the minions failed to arrange for Herr Geissler. Some communication breakdown. By now quite familiar with the ins-and-outs of Swiss travel, I simply took the train. There's a station right beneath your feet at Kloten airport Terminal B. To my delight I discovered that, even with a change of trains at Zurich Hauptbahnhof, the trip took only twenty-two scenic minutes by rail and Swiss chronometer, and deposited me a pleasant, three-minute walk from the office. After that I took the train whenever I was given the choice. I never commuted with Herr Geissler again, except for one last, wild time. But that's subject matter for another story altogether.

So: Driven? Yes, I've been driven. And I prefer the train, thank you.

Passionate?

Yes, I sometimes get passionate with my wife... occasionally I've been tempted to get passionate with other people. It has generally not ended well.

The thing about Passion is that it's all hot-bloodedness, sweaty palms, thumping heart, furious emotion and throbbing other bits. And bloody short-lived. Is that really what you're looking for in an Android Developer? Or would you rather hire someone who will see the project through the inevitable rough patches where your client suddenly and unreasonably invokes the corporate lawyers' jots and tittles and throws a cast-iron spike through the limpid clarity of your gifted UX designer's heaven-inspiring vision, rewriting the spec into something dreamed up by the by-blow offspring of Dante Alighieri and H. P. Lovecraft on bad acid after a hard night jamming black-metal and burning Norwegian churches?

If that's really the sort of Passion you're looking for, I think I'll pass.

Imaginative?

Is there a single human being on this Earth, of average intelligence or better, who is not imaginative? Just watch a small child persuading its parents that, No, I am NOT tired, I do NOT need to go to bed just now. Hell, even my dogs are imaginative when they're trying to persuade me to take them for a walk.

Then, too, I have been known to make claims of being a Writer (I don't say Published) of Science Fiction stories. That certainly takes some imagination. But then I had lots of practice. I learned from a Master. I went to a really strict Boys' High School, you see, along with my best mate, Roy. And Roy was one of the Naughty Boys. Constantly getting into trouble with the powers that were. More than half the time it was not even his fault and he was merely the unwitting victim of circumstance. He had some sort of genetic predisposition towards attracting trouble to himself. Consequently he became a Master at Talking His Way Out Of Trouble, and, along the way, I learned a whole lot from him about the art of fabricating stories. I learned, too, that the stories often don't need to be particularly plausible. Just good enough for the people who want to be able to pretend to believe. And that's all it takes for Science Fiction.

So: Imaginative? Yes, I think I'll own that one.

Remarkable?

I am never quite sure of the word "remarkable". Does it mean that I've done something that other people find odd enough to want to remark on, or does it mean that I am "able" to make "remarks" about odd stuff, thus being remark-able? I suspect (duh) that when you say you're looking for a Remarkable Engineer you mean the first sense -- you want someone who is "remark"-worthy. The trouble is that you don't say what they should be remarkable for... is it their dress-sense, the oddly clashing colours, pink shirt and purple neckscarf topping olive drab pants rolled up to the knees that they think make some sort of declaration of disdain for the world of conventional fashion and the sheer quantity of metalwork rivetted through the flesh of one ear so as to cause their head to lean markedly to one side? I'd certainly remark on that.

Not that I'm pointing a finger, mind. Not me. Very tolerant, I am. People's lifestyle and dress choices are their own, and frequently the least interesting thing about them. After all, it is pretty certain that I do or have done some things in my time that might give other people cause to remark on me. Like the sleeveless Afghan goatskin I affected back in the earliest of my student days. At least until the blackened pinhole burns multiplied like some ebonite ur-acne. That was probably remarkable. At least, the smell probably was.

So: Remarkable? Yeah, I'll 'fess up to that one, too.

Phenomenal?

Well, "phenomenal" is the adjectival form of "phenomenon", which one dictionary defines as "an appearance or immediate object of awareness in experience; a thing as it appears to and is constructed by the mind, as distinguished from a noumenon, or thing-in-itself."

I have long held the theory that the practice of software development (writing programs, to be less pretentious) is exactly the Practice of Magic. I mean, look at the Sorcerers and Wizards of fable and fantasy. (It is entirely Terry Pratchett's fault that I always want to spell Wizard with two Zs.) No, seriously, I am not even joking, here. The Sorcerers and/or Wizards (delete where not applicable) confine themselves to smoky dungeons/high garrets/dark towers, surrounded by piled-high stacks of grimoires, crafting intricate and eldritch spells in arcane and incomprehensible languages. Enchantments that, upon release into the world, wreak havoc, mayhem and general confusion. (I believe we call it "Disrupting Entrenched Monopolies".)

Remind me again: What's the difference between programming and sorcery? And Javascript really is pretty cryptic and arcane, isn't it. I wonder what DevOps opportunities there might be at Hogwarts, and do they do Agile at Mordor. (I doubt it.)

So I contend that software is all "a thing as it appears to and is constructed by the mind". By that measure, then, all software is Phenomenal. So, in using the word "phenomenal" in relation to writing code, you said nothing at all.

Seasoned?

Seasoned? What does that even mean -- a "seasoned" developer? That I've been left out in the Sun too long? That I smell a bit ripe? I don't want to know.

I can chuck some salt and pepper over myself if you like. Chilli flakes, even. MSG. Is that seasoned enough?

Finally?

Finally!

While it was not specific to any of the positions you advertised, in puffing up your dev group you invite me to "work with an absurdly talented group of people..."

I think I'll join the circus then. The Bearded Lady. The Dancing Bear. The Siamese Kittens. The Sword Swallower and the Tattooed Map Lady whose Ass Can Be Seen From Her Elba. Absurd and talented, all. Maybe they need a Scrum Master...

19 March 2015

Turning URLs into URIs: shrtn - A URL redirector/shortener.

The shutting down by Google of their project-hosting has forced me to migrate my URL-shortener project to BitBucket (because Mercurial is so much nicer than Git), and along the way caused me to take it upon myself to resuscitate the project. For one thing, I've passed out URLs that rely on it residing at 1.mikro2nd.net, and, as things currently stand, those are dead links. For another, I've figured out a second, more important, use case for URL redirectors than mere shortening.

While shortening URLs may have some (highly debatable) utility, I consider them to be, on balance, a harmful thing. They make the link destination opaque, and the poor sucker clicking on them has absolutely no idea where they might end up. Then, too, the indirection allows the redirector host to potentially introduce all sorts of mischief or stupidity into the user's browse path. These are not insurmountable problems, just small stumbling blocks in the path of the concept, but they do argue for considering URL-shortening to be harmful. Or at least seldom accruing benefit to the clicker.

What they do buy you is the ability to gather ego-gratifying statistics on who clicked your links, which can give you a measure of how well your voice is heard in the social Internet. An oft misleading measure, to be sure, but sometimes some measure is better than none, as long as the user of said statistics remains aware of their limitations and biases.

There is, though, a very legitimate and compelling use for the idea of redirectors: They provide a bridge between the world of URIs and the world of URLs. (For the longest time I was a bit hazy on the distinction between the two, but I'm better now, thank you.) A shortener (or redirector) service allows us to publish identifiers of stuff (I'm trying to avoid the "content" word, having developed a nasty allergy) to the 'net without worrying about the location or address of that stuff. For example, I might have published a document (or an application or a collection of photographs or whatever) using some file-hosting service. Let's call it ohdearieme.com. Now it turns out that ohdearieme.com was also hosting a bunch of stuff that the American copyright fascists decide they want disappeared. So they have the FBI kick down the doors of ohdearieme's hosting centre with the aid of a compliant country's spook services, and they steal, er, remove the servers holding your perfectly legitimate and legal "stuff". Too bad. Anybody hanging on to the URL you gave them referring to ohdearieme.com is now out of luck. Had you used a redirector, though, it would be no problem. All you'd have to do is upload a copy of your "stuff" to a new file-hosting service and change the destination URL in your redirector. This is, indeed, a useful thing, and, after all, exactly what the Domain Name System is all about.
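
The mechanism at the heart of all this is tiny. Here's a sketch of the essential core of such a redirector, with illustrative names and an in-memory map standing in for real storage (it is not the actual shrtn code):

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class RedirectServlet extends HttpServlet {

        // Stand-in for a real storage scheme: short key -> destination URL.
        private final Map<String, String> mappings = new ConcurrentHashMap<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String path = req.getPathInfo();              // e.g. "/abc123"
            String target = (path == null || path.length() < 2)
                    ? null : mappings.get(path.substring(1));
            if (target == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            } else {
                resp.sendRedirect(target);                // sends a 302 Found
            }
        }
    }

Note the 302 rather than a 301: a permanent redirect invites clients to cache the mapping forever, which would defeat the whole point of being able to re-point the destination later.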

So there's my compelling use-case for a URL shortener/redirector, and I still have a couple of hundred words to write to reach my day's target. Let me describe, then, some of my other thinking around this little project.

I've already implemented two or three different storage schemes for the "database" that the server needs in order to work, and exactly which one gets used for a given deployment is merely a configuration issue. So what's another one? The trouble is that it is becoming cumbersome to include all the storage implementations, along with their dependencies, in the final deployed product. I know that many, if not most, developers would just chuck everything and the bathtub into the deployment, but it offends my sense of neatness. I want to build a number of separate artefacts: one that contains the actual redirector server, and then one for each storage scheme. That way a deployer (or, heavens forfend, an actual System Administrator) can deploy only the exact artefacts they need. This also means that updates to one module don't necessitate a refresh of the entire system. More immediately, it means that I want to build a number of separate artefacts for this project, rather than a simple WAR file, making Maven a much better fit than the straightforward Ant build generated by Netbeans, so I'm having to (at last) learn something about using Maven properly. I guess I'll learn to live with the tediously slow builds, though it does feel like the 1990's called and want their Makefiles back.
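
That pluggability suggests a storage abstraction along the following lines. This is my own guess at the shape of such an interface, not the project's actual code; each scheme would ship an implementation of it in its own artefact:

    // Hypothetical shape of the storage abstraction; each scheme (in-memory,
    // file-backed, datastore-backed, ...) lives in its own artefact.
    public interface UrlStore {

        /** The destination URL for a short key, or null if the key is unknown. */
        String lookup(String key);

        /** Create or update the mapping from a short key to a destination URL. */
        void store(String key, String destinationUrl);
    }

The core server then depends only on the interface, and which implementation lands beside it in a deployment becomes purely a packaging and configuration decision.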

Then, too, I'm of a mind to host this thing (at least my own personal redirector) on Google's massive cloudy infrastructure. Given the meagre volumes of data I'll be shifting (which, presumably, speaks volumes for the paucity of my Social Internet Fu) I'm pretty-well certain I can keep hosting it there free for all eternity—or until Google decide to shut down their cloudy hosting engines or make them for-pay only. If or when that happens I guess I'll have to move the redirector back onto my own infrastructure, so I don't want to lose the ability to deploy to my own (Tomcat) application server. That means teasing out the deployment configuration and infrastructure-specific stuff from the core of the redirector code itself. All sounds reasonably doable for me, and I'm using the exercise to polish up my Mercurial, Maven and AppEngine skills, not to mention brushing up on changes in the Java language and APIs. I might even use it to improve my Javascript skills or get properly to grips with one of the JS front-end frameworks.