Showing posts with label networking.

13 September 2010

Setting up a PPTP VPN with KDE NetworkManager

Filed under "Notes to Myself". If this helps someone else out there, Good!

The problem: to VPN into a closed Microsoft-dominated network.

After 6 weeks of hacking at it, the client's network administrator finally managed to get the VPN set up on their office server (some version of Windows is involved, so no wonder it is an opaque and difficult process taking weeks and involving numerous reboots. I am frequently moved to wonder whether people actually enjoy the pain that results from using Microsoft software... I can't think of any other reason to use it.)

So it helps to have the admin tell you:
  • the gateway address for the VPN
  • your username and password
More importantly for a n00b to VPNs (i.e. me), it helps to get told that
  • the VPN protocol is PPTP (MS proprietary AFAICT) and
  • that it requires some (MS-peculiar) encryption scheme (MPPE) to be used.
Surprise, surprise! Only took a day to figure these things out.

The rest of the trouble comes from Kubuntu Linux insisting on using the fucked-up awful NetworkManager. I could not find reliable/working information on setting up the correct config by hand, so was forced to rely on NM. Also tried Kvpnc, but could not make it work for the client network configuration.
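
(For the record, the hand-rolled alternative would be pppd plus the pptp-linux client. A rough sketch of what I believe the pieces should look like -- the peers-file name, username, password and gateway address below are made-up placeholders, and I never got this variant working against the client's server:

# /etc/ppp/peers/client-vpn -- hypothetical peers file
pty "pptp vpn.example.com --nolaunchpppd"   # tunnel via the pptp client
name myusername
remotename PPTP
require-mppe-128    # the MS-peculiar MPPE encryption
refuse-eap
noauth
nodefaultroute      # don't hijack the default route

# /etc/ppp/chap-secrets -- client / server / secret / IP
myusername  PPTP  mypassword  *

Then bring it up with:
$ sudo pppd call client-vpn

The nodefaultroute option is there precisely because of the routing grief described next.)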

NM insists on setting the default route for all network traffic to be via the VPN client network. Not what I want. I need on-going access to my own local network resources as well as the VPN resources (as well as my own internet connection) as I am developing stuff that relies on local resources to work. After starting the VPN, my machine's routing table looks like

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
41.133.194.199  192.168.1.254   255.255.255.255 UGH   0      0        0 eth0
41.133.194.199  192.168.1.254   255.255.255.255 UGH   0      0        0 eth0
192.168.0.23    0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
192.168.1.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 ppp0
(192.168.1.0/24 is my own local net; 192.168.0.0/24 is the client's network.)

Note that last line. There's the troublemaker. I don't want all traffic routed to the VPN by default. I tried every possible combination of settings in the KNetworkManager applet, especially those that claim to prevent the VPN from overriding the automatic routing. I tried manually setting all the VPN info (IP address, netmasks, etc.) but that failed to work either.

Ultimately I resorted to a workaround. Accept the crappy routing that NM sets up for me, then fiddle with the routing tables by hand:
$ sudo route del -net 0.0.0.0 ppp0
$ sudo route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.1.254 dev eth0
These 2 lines get me a sensible default route outta here, and
$ sudo route add -net 192.168.0.0 netmask 255.255.255.0 dev ppp0
gets me a route to all the client-network resources (albeit without any DNS lookups for their subdomain; this I can live without, since there are only a small handful of machines I need access to.)
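
(Should the missing DNS ever become a nuisance, a forwarding rule in a local dnsmasq ought to cover it -- something along the lines of
server=/clientdomain.example/192.168.0.1
though the domain and name-server address here are pure guesses, since I don't know what the client actually runs on their side.)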

The resulting routing table:
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
41.133.194.199  192.168.1.254   255.255.255.255 UGH   0      0        0 eth0
41.133.194.199  192.168.1.254   255.255.255.255 UGH   0      0        0 eth0
192.168.0.23    0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
192.168.1.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 ppp0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0        0 eth0

Can't say it's pretty, but it works.
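
For what it's worth, the same fiddling can be expressed with the newer iproute2 tools. Roughly equivalent commands, using the same addresses as above (I haven't tested this variant myself):
$ sudo ip route del default dev ppp0
$ sudo ip route add default via 192.168.1.254 dev eth0
$ sudo ip route add 192.168.0.0/24 dev ppp0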

10 September 2009

Network Disasters Happen in Threes (Make That Fours)

Only 9 more sleeps to go...

Strike 1: Last week, disaster struck in the form of a 2-day DSL outage. Telkom -- my current DSL provider -- blithely went and cleared the fault ticket after 24 hours -- without any consultation with me -- because their test centre said that my router was getting a connection to the local exchange. The ticket comment was, "Fault closed at customer request." Liars!

Much wailing and gnashing of teeth later, they discovered that a whole lot of people in the area were experiencing the same problems: very sporadic connectivity with almost no traffic getting through. Turned out to be a fault on the exchange itself.

Strike 2: On the same day that this problem started, my London-housed server went down for Reasons Unknown. Of course I was blissfully unaware of it until Friday. 2 days of server outage. Then my service provider there was terribly slow to rectify the problem (which -- as the universe will insist upon -- involved a fractal nesting of sub-problems with their own sub-sub-problems ad mandelbrot.) As I write, the server is still only partially up. Apache service -- the one my paying customers rely on -- is up and working fine on one IP address, but Tomcat, hosting my personal and "corporate" sites, blogs and wikis still cannot talk through the other IP address. The service I get from VAServ is pretty kak, but not so kak as to be noteworthy -- they really are giving me a very low-cost package, and, as always, You Gets What You Pays For.

Strike 3: At the same time an FTP backup service I use for offsite backups refused to authenticate me, using the same credentials I've been using for years, with the result that I could not even ensure the safety of all the data! Be Still, My Twitching Ulcer.

You're Out: Welcome to this morning, where we present -- for your entertainment and edification -- a reprise of last week's DSL outage. Telkom, predictably, and once again, are in complete denial that there is actually a problem. Their tests show a solid connection between my router and their exchange. No shit, Sherlock! Pity the bits can't squeeze through the tiny opening.

I've always been very happy with the ISP service I've received from WebAfrica, and have, over the years, put many friends and colleagues onto them, not one of whom has had anything less than Sterling service. I've asked WA to take over my DSL service [1], too, in light of recent events. The only bit of business Telkom will be getting from me for the foreseeable future will be POTS.

Only 9 more sleeps to go... [2]

I think this sorry whining actually has a point; it tends to back up my long-held belief that telcos are constitutionally incapable of competently running IP services. The cultures and philosophies that make end-to-end controlled networks are unable to comprehend -- in some weirdly deep, DNA-level way -- how to cope with IP networks which have almost no intelligence in the middle, but live, instead, with all the intelligence at the edges.

People who run IP networks, on the other hand, are able to provide perfectly adequate voice services over IP, which is why they're going to eat the telcos' lunch over the long term.

[1] There's no transfer/installation fee. Their monthly rates are at present the same as Telkom's, and as they roll out their own infrastructure, they anticipate reducing the charges. Their support desk is outstanding, staffed by people who actually know stuff, don't mind admitting mistakes and problems, treat customers like Real Humans instead of problem IDs, and follow through on promises and commitments, ensuring that things get fixed. I can't see any downside, can you?


[2] Of course, it occurs to me, a little late, that I'll only be able to actually post this when I get the server working properly again. Which will only happen when I get some reasonable connectivity back. Which might happen slightly after Lucifer goes skiing from his front doorstep.

03 July 2008

Power to the Purple

A week of power-supply problems. Not Eskom's fault, this time, but more localised failures.

First the power-supply for the network server had a fan stop turning. I could have taken the chance on the unit working without cooling, since it is relatively lightly loaded -- no graphics cards, only a single disk -- but, since I had a spare power-supply unit handy, it was a task of mere minutes to swap the faulty unit out and get the server back into action. It is a fairly key piece of our little home network, being a web-cache, local domain-name server and cache, Subversion repository and file-share space, so we miss it badly when it is down.

Then the power supply on my desktop machine decided to follow suit. Also a fan failure. I hate those crappy little fans! There's absolutely nothing wrong with the basic electronics of the power supply itself, but the ball bearings in the fan have died. Pricing for a new power-supply runs from a little over R100 if I were in Cape Town with easy access to wholesalers, through R200 from a web-shop, all the way to R300 from the local PC shops! This is for the most basic 350W PSU -- none of that fancy gaming-machine stuff for me. (Though I will confess to being tempted by a unit costing around R800, simply because it is alleged to be completely quiet! I'm a self-confessed anti-noise-maniac.)

My guess is I'm going to spend an hour messing about with the soldering iron, installing new fans (I have a couple just lying about) in the "faulty" power-supplies.

At the same time, several warnings arrived from my server-supplier in London, telling a week-long tale of woe about the power supply into the datacentre. Apparently a failover switch failed to work correctly during a power outage last Sunday, causing the battery-based UPS to take the entire load for about 10 minutes before the batteries were totally drained. All servers in the DC went down hard. It took them until Thursday to isolate the problem and replace the parts (electrical and mechanical) that were at fault.

During the whole affair, all server owners have been kept fully informed via RSS feeds and emails at every step of the way, since there is a risk (however slight) that servers might go down if there is a power-grid outage again and the on-site staff -- now fully briefed on managing a manual switch from grid power to the backup generator -- happen to be tied up at just the wrong moment.

This is exactly the sort of thing I expect from server providers and datacentre operators. Everybody understands that, despite the best-laid plans, sometimes shit happens. It is how they respond, and how transparent and communicative they are in responding to the crisis, that truly matters.

This is in very sharp contrast to Verizon's datacentre in Durban, where my other client's servers are housed. About 10 days ago they had some electrical work going on in the DC, which in turn made some server-moves necessary. They did all this without warning their clients that there might be some risk to their operations. Needless to say, my client's servers went down without warning in the wee hours of Sunday morning. No heartbeat monitoring in place, so it was Monday before anybody knew that something was wrong. No peep from Verizon to their customers. Half-arsed, I call it.

There's a lesson in all this about Single Points of Failure. I've been warning for over 8 months that having all the servers housed in a single DC, or even in a single city, is a risk. Maybe now the business will take some action, but, given the general lack of respect or attention to the fact that, like it or not, they are a technology business, I have my doubts.

02 July 2006

Internet3.0

Robert Cringely makes an excellent point: we should own the "last-mile" infrastructure ourselves. Instead of farming it out to that bunch of robber-bandits the phone and cable companies, we should build and own it ourselves, co-op style. He quotes Bob Frankston as proposing that this last-mile infrastructure be implemented as Fibre To The Home.  (Unfortunately the second half of his article meanders off into a meaningless rant about Microsoft that does nothing to further the discussion of community-provided infrastructure.)

Now, self-built-and-maintained local-loop optic-fibre infrastructure may be feasible in the more densely populated parts of the USA, and possibly Europe, but no way here in Africa, least of all in a rural area such as I choose to live in.  Far more reasonable for us to look to WiFi for that answer.  Wireless makes a lot more sense in most locations, anyway, in that the maintenance burden is much smaller, being localised to the wireless nodes themselves.  Fon is targeting precisely this space, and I wish them much success with the model.

The fatal weakness in the scheme is still the backbone.  Fon, in common with Frankston's idea, assumes that the local loop connects to some "large infrastructure backbone" provided by ISPs who will remain neutral bit-carriers.  Dream on!

Furthermore, there is the interesting (to me) question of whether it is at all possible to maintain a global internetwork during the disruptions likely headed our way as we descend from the cheap-oil plateau.  It takes serious amounts of energy, time, money and organisation to maintain a large-scale wired infrastructure such as existing telephone and cable networks. 

Currently my 'net access is via the state-monopoly phone company, Telkom, who are either totally bent on network control and continuing access restriction (resulting in the most expensive network access in the world!) or they are simply total incompetents: they can/will not provide proper two-way network access.  It is impossible to run a server at my end of the 'net, due to the configuration of their firewalls and proxies.  This is not a network!  Something that telcos are constitutionally incapable of understanding due to the nature of the networks they have been running for decades.

The whole discussion of community-provided infrastructure resonates with something I have been giving quite a bit of thought lately: Internet3.0 - The Community Provided Internet.

Drawing on the theme of Web2.0, characterised by much web content being generated and provided, edited, filtered, and rated by the community, together with Frankston's idea of community-supplied last-mile wiring (whether fibre, WiFi, WiMax, laser or carrier pigeon), I believe we should be building community-owned-and-run long-haul networks - community-driven Internet backbones.

I am well aware that there have already been some successful efforts to build trans-America wireless mesh networks, and this is precisely the model I think we should adopt.  I do not propose or expect that we would aim to replace existing wired infrastructure.  Wired networks have distinct reliability and bandwidth advantages over wireless; this is inherent in the physics and operating environment.  We can and should, however, have alternative routes for IP traffic that reside outside the hands of corporate and government control.

This last issue is difficult. Many repressive regimes would and do restrict access to wireless spectrum, including South Africa, where it is technically illegal to establish a wireless link to your neighbour without a license.  Licenses are unobtainable, and the charge for a license is prohibitive.  Fortunately we have a strong tradition of civil disobedience in such matters!

The time to build a global wireless mesh of networks is now.