23 December 2017

LibreOffice, Linux, Nvidia and OpenGL - A Combination from Hell.

A quick note in the hope that it may save somebody else a little time, stress and trouble.

If you're a LibreOffice user, and you run it under one or other GNU/Linux distribution, and you use the Nvidia Graphics Drivers (rather than some default generic driver), you might be tempted to enable OpenGL rendering for LibreOffice.

DON'T DO THIS.

For reasons that remain largely occult to me, this particular combination of circumstances causes LibreOffice to crash during startup. (For the C/C++ programmer who might care: it seems that a function pointer in the Nvidia driver is null, so when LibreOffice calls through it during startup... crashity, crashity, segfault, whump.)

The solution is simple: don't enable OpenGL rendering in LibreOffice. The option lives at Tools >> Options >> View >> Graphics Output >> Use OpenGL. Leave it well alone.

But what if you have already done that and your LibreOffice instance now refuses to run?

I guess there is some configuration file among LibreOffice's many (to be found in $HOME/.config/libreoffice and depths below) that would allow one to delve in with one's favourite text editor (i.e. anything but nano) and fix it. I could not find it. In the end I simply removed the entire LibreOffice configuration tree and let LibreOffice create a new one on the next startup (which was entirely and predictably successful, don'cherknow). Since I tend to keep customisation as lightweight as I can, it only took a couple of minutes to put things back the way I like them, so not much harm done. It takes a fair while to discover all this, though...
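
(If you'd rather keep an escape hatch than delete outright, move the tree aside instead: with LibreOffice closed, something like mv $HOME/.config/libreoffice $HOME/.config/libreoffice.bak from a terminal does the job, and lets you restore the old profile if the fresh one disappoints.)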

Hope this helps someone out there. If you found a better way to fix this issue, I'd love to hear about it!

BTW: This seems to be quite independent of Linux distribution. The most help I found in searching for a fix came from the Mint and Arch communities. I use Kubuntu.

02 January 2017

Viewing the Ad-Driven Internet as a Commons

Here's a thought... 

The Ad-revenue-driven Internet is (yet another example of a) Tragedy of the Commons

The ad-funded website derives some (small, perhaps, but measurable) benefit from placing ads, drives clicks through them with least-cost clickbait, and makes some money. The Commons of the Internet, We the Readers, carries the cost. Not only the cost of our attention, the time of our lives, but literally the cost of delivery; we pay for the bandwidth and infrastructure needed to get those ads in front of us. So the beneficiaries of this scheme, the ad-funded websites, the Facebooks and Googles and Twitters and Instagrams, are simply more Exploiters of The Commons, driven to maximise the exploitation in ever-increasing ways (because "if we don't catch those fish, someone else will, so we'd best get there first and fish them the hardest").

BUT. We all know what happens in every other Race To Eat The Commons... Sooner or later the Commons collapses. The fish get fished out; the grassy pasture becomes the Sahara, the air becomes unbreathable.

It's not the ad-blockers those sites have to fear. Ad-blockers are clearly just a form of immune response, just like the fish that keep getting smaller and smaller under fishing pressure. The sites should bless and welcome the rise of the ad-blockers, because their nett effect is merely to prolong the life of the Commons.

No, what the ad-revenue sites ought to fear is the ultimate and inevitable Collapse of the Commons. It is hard to see what form that collapse will take, or when it will come. All we can confidently predict is that the end of the ad-driven Internet model is a certainty.

It can't come soon enough.

18 August 2016

A 14-Point Framework for Evaluating Programming Libraries & APIs (Part 2 - Final)

In Part 1 of this write-up, I discussed some of the reasons we might want a clear-cut framework for measuring and evaluating programming APIs, and went on to identify the first 7 of the 14 dimensions I find useful in classifying and comparing different, but "competing", APIs. Here are the remainder...

8. Leverage

The 80/20 rule: can the most commonly-used 20% of a library do 80% of what we'll ever need it to do? (Gson is a great example of getting this right.) Chances are good that we'll never need the remaining 80% of the library, but it is good to know that it's there come the day we hit some corner case where it becomes necessary.
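
To make that concrete, here is a minimal sketch of what Gson's commonly-used 20% looks like; the Person class is invented purely for illustration:

    import com.google.gson.Gson;

    public class GsonLeverage {
        // Plain data-holder, invented purely for this example.
        static class Person {
            String name;
            int age;
            Person(String name, int age) { this.name = name; this.age = age; }
        }

        public static void main(String[] args) {
            Gson gson = new Gson();
            // toJson and fromJson: the tiny surface that covers most real needs.
            String json = gson.toJson(new Person("Ada", 36));
            Person back = gson.fromJson(json, Person.class);
            System.out.println(json + " -> " + back.name);
        }
    }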

9. Discretion

How well hidden are those implementation details that we should not be concerned with? Is it obvious (or even apparent) which stuff is meant to live on the surface of the API (i.e. is part of its UI, and intended for our use) versus the stuff we're not meant to mess with? This relates a bit to the Opacity of an API, but deserves to be considered separately. Well-structured APIs will have explicit, well-advertised points whereby we can customise or extend the behaviour of the API to cater for our own peculiar corner cases, without having to open the Pandora's Box that is its inner workings. (Some programming languages and environments make this easy; some make it difficult or impossible. This is a context you must take into account when evaluating an API's discretion.) If (environment permitting) we constantly have pure implementation detail thrown in our faces, it becomes much more difficult to sort out the stuff we need to know from the stuff we don't need to know, or, worse, the stuff we really should not mess with.
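
Gson makes a handy illustration here too: GsonBuilder.registerTypeAdapter is exactly such a well-advertised customisation point, letting us change behaviour without touching the library's innards. A sketch, with the LocalDate handling invented for illustration:

    import com.google.gson.Gson;
    import com.google.gson.GsonBuilder;
    import com.google.gson.JsonPrimitive;
    import com.google.gson.JsonSerializer;
    import java.time.LocalDate;

    public class DiscretionExample {
        public static void main(String[] args) {
            // The advertised extension point: register our own serialiser
            // for a type the library doesn't handle the way we'd like,
            // without ever opening up Gson's internals.
            Gson gson = new GsonBuilder()
                    .registerTypeAdapter(LocalDate.class,
                            (JsonSerializer<LocalDate>) (date, type, ctx) ->
                                    new JsonPrimitive(date.toString()))
                    .create();
            System.out.println(gson.toJson(LocalDate.of(2016, 8, 18))); // "2016-08-18"
        }
    }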

10. Documentation

How good is the documentation? Is it up to date, or is it describing some historic version to which a whole lot of stuff no longer applies? (Volley!) Open source may count as documentation, but then make damn sure you can actually read and navigate it (see Couchbase as a counter-example!) We seldom have the time to dredge through some open-source project's source (of questionable quality) to figure out what it should be doing or how we should be using it. A pointer to some examples buried deep in a library's source tree, lacking any form of comment or documentation (hello Bouncy Castle!), is no substitute for adequate documentation.
While we're talking about documentation, beware of the simple Hello World tutorial! All too often tutorial material is so trivially simple that it is effectively useless for communicating the intent and use of an API. As a showcase for features, these tutorials are a long-winded way to achieve nothing that couldn't be told in a short bullet-list, and they almost universally make the terrible error of dragging in irrelevant and distracting features simply to show off, rather than to instruct.

11. Support

StackOverflow is not support. How responsive are the devs (or the support staff, if it's a closed, proprietary API) in the various fora, mailing lists and so on? The availability of paid support is no guarantee of quality; anybody who has spent an hour listening to telephone muzak at international call rates knows what I'm talking about.

12. Churn

How quickly are new releases made, and do they frequently make compatibility-breaking changes?
Some reasonable update frequency is, of course, a good thing. Usually. It means that bugs are getting fixed, performance is being enhanced and new approaches are being embraced. At the other end of the spectrum, changes can come too often, and we become code-followers, on an endless treadmill of adaptive changes to our own code. (Hello Android!)
A word of caution, though: it is not easy to distinguish between an API that is moribund and one that has simply reached a level of maturity that near-eliminates the need for updates. Too many projects fall into the trap of creating updates for the sake of seeming active and healthy when, really, all they're doing is following a fashion industry.

13. Power-to-Weight Ratio

How much heavy lifting does the API do relative to how hard it is to learn, use and maintain?
Sometimes a library might be pretty difficult to learn because it requires us to learn whole new vocabularies, whole new ways of thinking about the world. Is it worth it? Sometimes the answer will be a clear YES, sometimes a NO, but mostly it lands somewhere in between. If the problem a library solves is a pretty trivial one, then it becomes much easier for us to evaluate Power-to-Weight, and we are more likely to demand that the API be correspondingly trivial to use. At the other extreme, a library might solve a pretty hard problem (synchronising data among multiple devices; distributed pretty-much-anything algorithms; concurrency) and so be worth investing significant effort in learning.

14. Entanglement

How many sub-dependencies does the library pull in? Does it stand alone, or are you in for a "Maven-downloading-the-entire-Internet" priming-build?
How much of a problem this is for you depends on many, many factors. The most pernicious thing that can happen is that transitive dependencies drag in incompatible (or, at least, different) notions of the same things. I recently saw this in a project where one library pulled in one way of doing JSON marshalling and unmarshalling, a second library pulled in a different subordinate library for doing the same thing, and my own preference was for yet a third library (which had actually been pulled in well before the other two, so I already had plenty of code using it; changing that would have been a pain!) We ended up with three different JsonObject classes, all subtly incompatible. Ugh.
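
To sketch the sort of clash I mean (these three JSON libraries are stand-ins chosen for illustration, not necessarily the ones from that project):

    public class Entanglement {
        public static void main(String[] args) throws Exception {
            // Three "JSON object" notions from three dependencies, each
            // spelling the same idea differently.
            com.google.gson.JsonObject a = new com.google.gson.JsonObject();
            a.addProperty("id", 1);

            org.json.JSONObject b = new org.json.JSONObject();
            b.put("id", 1);

            javax.json.JsonObject c = javax.json.Json.createObjectBuilder()
                    .add("id", 1)
                    .build();

            // None of these types is assignable to another, so conversion
            // glue code spreads through the whole project.
        }
    }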

In Conclusion

In evaluating a bunch of libraries or frameworks that claim to solve a particular set of problems you face, it helps to separate out the various elements that make them more or less suited to your circumstances. These elements (or dimensions of measure) may or may not carry similar weight in different situations. You might find some use in applying an arbitrary numerical scale (1-5, the Fibonacci series,...) to each dimension and assigning scores to each API under consideration, or you might be content with a fuzzier gut-feel judgement. Spider diagrams might be useful. Spreadsheets, too. Some of the dimensions I consider important might not matter to you, and that's OK. The important thing is to evaluate our tool-sets dispassionately, with some set of metrics to guide us. I hope you find these ones useful.
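
If you do go the numerical route, the mechanics are trivial; here is a sketch, with dimension names, weights and scores all invented for illustration:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ApiScorecard {
        public static void main(String[] args) {
            // Hypothetical weights: how much each dimension matters to *your* project.
            Map<String, Integer> weights = new LinkedHashMap<>();
            weights.put("Leverage", 3);
            weights.put("Documentation", 5);
            weights.put("Churn", 2);
            weights.put("Entanglement", 3);

            // Hypothetical 1-5 scores for one candidate library.
            Map<String, Integer> scores = Map.of(
                    "Leverage", 4, "Documentation", 2,
                    "Churn", 3, "Entanglement", 5);

            // Weighted sum: multiply each score by its weight and total them.
            int total = 0;
            for (Map.Entry<String, Integer> w : weights.entrySet()) {
                total += w.getValue() * scores.get(w.getKey());
            }
            System.out.println("Weighted score: " + total);
        }
    }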

My thanks to friends and colleagues at Polymorph Systems for review and helpful suggestions. Mistakes and idiocies remain all my own.