
Personal Blog

This post is written for a particular family member who wanted to know what this means.

By the year 2025, BT aims to abolish the good old POTS or Plain Old Telephone Service, also known as the Public Switched Telephone Network or PSTN.

Wait? What?! Does that mean no more landline calls?!

No. Let me explain how the system works at present. We have a hybrid system where the internet and the phone share the same line, split by frequency. We use Copper To The Cabinet* (CTTC), Fibre To The Cabinet (FTTC) or Fibre To The Premises (FTTP) to deliver an internet connection. Broadband, as it is called, refers to the broad band of frequencies in use: the vast majority of the line's bandwidth is given to the internet whilst a small slice at the bottom is kept for the phone line, or PSTN.

When broadband first arrived, to cope with the increased internet speed and to allow the phone and internet to be used at the same time, the phone signal was confined to a narrow slice at the bottom of the line. It went from taking a full 100% of the bandwidth to just about 0.5% (0kHz to 4kHz). The rest went to the internet connection, where 87.5% of the line was given to download (138kHz to 1104kHz) and around 10% to upload (25kHz to 138kHz). Finally, to ensure that the analogue PSTN does not receive interference, around 2% of the bandwidth is reserved as a guard band between it and the upload. Remember, these signals all travel along the same wire but are separated by frequency, which also explains why upload tends to be a lot lower than download - it simply gets a much narrower band. Below is how this is all separated out:
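Using the band edges above (and assuming the standard ADSL-over-POTS band plan, which tops out at 1104kHz), you can work the shares out for yourself:

Bash
# Each band's share of a 1104kHz ADSL line (band edges in kHz)
awk 'BEGIN {
  total = 1104
  printf "phone (PSTN)  %.1f%%\n", 100 * (4 - 0)      / total
  printf "guard band    %.1f%%\n", 100 * (25 - 4)     / total
  printf "upload        %.1f%%\n", 100 * (138 - 25)   / total
  printf "download      %.1f%%\n", 100 * (1104 - 138) / total
}'

That prints roughly 0.4% for the phone, 1.9% for the guard band, 10.2% for upload and 87.5% for download - in line with the figures above.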

As you are no doubt aware, removing the copper cabling that is in use at present and replacing it with fibre increases the available bandwidth, so faster connections become possible.

Abolishing the PSTN from the line would also give speeds a small boost, because the bottom 25kHz currently reserved for it (and its guard band) could be handed over to the internet instead.

So what would happen to phone lines?

The purpose of this article was to inform someone in my family of what we have recently chosen to do in our own home - abolish the PSTN from it. Yeah, that's right, as of this week we've got no PSTN telephones in the house; we now use a PBX powered by Asterisk (which I set up back in November for my own line) and a bunch of SIP phones. Businesses have done this for years, and the flexibility of these phones is what makes them great.

When BT gets rid of the POTS and PSTN, we will all use phones running on VoIP, or Voice over IP, technology - basically internet phones. But that doesn't mean you need to buy any new phones; BT, or whoever is in control of the phone line in your home, will need to provide a compatible option to connect your existing phones to the new VoIP network.

I'm a bit of an expert now on SIP, VoIP and the PSTN so if you've got any questions just fire away!

*officially, Copper To The Cabinet is not a thing, it's just what I've called it here!

Just a few days ago I was saying how Microsoft is becoming one of the tech greats again - Windows 10 finally feels like a decent OS (although it will never be macOS or Linux and will always have flaws at its core), their cloud platform is really quite something, they've opened the .NET library to everyone and made it cross platform, and they've done some awesome work with the latest version of Microsoft Office. Then they pull a stunt like buying GitHub.

Microsoft has been very committed to GitHub in recent years - apparently they make the most contributions to repositories of any organisation, which is nice to see - but it's still sad to see a company whose former CEO once referred to open source as a cancer, and whose main products are all proprietary, paid-for and closed source, buying a company committed to making open source a big thing.

Of course I use GitHub - who doesn't in the software development world? I have my own private repositories where I store the latest versions of Dash and ZPE amongst other software, but from my point of view, particularly from the point of view of integrity, I am worried about the future of GitHub. If Microsoft pushes new restrictions as it has done in the past (for instance, the shutting down of the free, fan-made Halo 3 remake for PC), then GitHub may not be the place for open source developers to put their faith in.

I'm not being critical of Microsoft here, by the way, I'm just pointing out that I don't think their $7.5 billion purchase was the right move for the community.

As of the 17th of June, I am no longer a paid member of the GitHub service. I'll be moving to my own private Git repository at some point soon.
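For what it's worth, self-hosting a Git repository is surprisingly simple - at its core it's just a bare repository sitting on a server you control (the paths and hostname below are made up for the example):

Bash
# On the server: create a bare repository to push to
git init --bare /srv/git/myproject.git

# On your own machine: add it as a remote and push the existing project
git remote add home ssh://user@git.example.com/srv/git/myproject.git
git push -u home master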

It's true. We're finally on to BT's Superfast package! Finally our internet connection at home is fast enough to download a server backup each day without sacrificing the whole connection.

After doing a speed test this morning, I noticed that we've had a huge increase in speed. Going from 12Mbps all the way up to as high as 48Mbps, we're getting a much better all-round experience.

Upload isn't bad either - we're getting 8Mbps which is 16 times faster than the 0.5Mbps upload we got yesterday!

I thought I'd share some Linux wisdom with you all. Today I'm talking about symbolic links.

Until recently my live site has been a direct duplicate of all of the content of the development site. This meant that I needed to have two copies of every static file. Uh oh. For instance, the photo gallery on my website is about 400MB in size, so that's 800MB used for the photo gallery between the development site and the live site.

Overall, the method described is expensive and isn't necessary. For quite a while I had been considering symlinking the two to avoid static content being duplicated. At last, it has been done. I now have a new section on the web server called user_content - a place where all user content that is identical between the live and development websites will go. This not only simplifies the copying of content, since there is no longer a manual copy between the development and live sites, but it also reduces the storage space that was wasted with the old design.

For example:

Bash
ln -s /www/user_content/jamiebalfour.com/files/gallery /www/sites/jamiebalfour.com/public/live/gallery/content

simplifies the whole process of the gallery updates on both the development and live sites.
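If you ever want to check where a link points after creating it, these two commands (using the same paths as above) will show the link itself and the absolute path it resolves to:

Bash
# Show the symbolic link, then the target it resolves to
ls -l /www/sites/jamiebalfour.com/public/live/gallery/content
readlink -f /www/sites/jamiebalfour.com/public/live/gallery/content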

Overall, using symbolic links has made huge differences to my web server.

Today I attended the Amazon AWSome conference, and I've decided that over the next few weeks I will move over to using AWS in more and more of my projects.

The conference was very useful because it gave me an insight into how I would use AWS, but it also covered the basics of getting started and how I could migrate to the Amazon cloud service. I found the talks interesting, the presenters were well informed on what they were speaking about, and within the first part of the day I had decided it was time to make the move.

So what did I learn? Well, perhaps most crucially, I learned that it's not as daunting as I first imagined and that AWS offers most of the features I currently rely on from day one. I also learned that it's not going to be overly expensive to make the shift - perhaps it will even be cheaper in the long run.

Posted by J B Balfour in Tech talk

I'm sure a lot of you will remember the many revolutions in technology, but perhaps the biggest one was when we moved away from parallel ports, slow serial ports and eventually even PS/2, and moved to USB. Eventually, competition started to appear, such as FireWire and eSATA, whose focus was mostly on storage as opposed to general (or universal) use. Both have pretty much vanished. eSATA attempted to side with USB by forming a combo port, but unfortunately it did not last, partly due to the lack of robustness in SATA and eSATA cables and partly due to the fact that SATA as a standard is steadily being displaced by PCI Express.

When PCI-X and AGP were superseded by PCI's younger sibling, PCI Express, we all knew the expansion bus had come back with a vengeance. Boasting 2.5Gbps across a single lane all the way back in 2004, PCI Express was bound for success.

A single lane PCI Express card

But it's only now that one of the things I've dreamt of has become a reality. Back in 2011, I proposed on my blog an idea to extend PCI Express out to external GPUs through ExpressCard - itself an exceptionally clever use of both PCI Express and USB to make a single standard capable of multiple speeds (sounds familiar, right?). Of course, eGPUs did exist for ExpressCard, but they were slow and cumbersome - not something many people would want.

Behold, Thunderbolt! Thunderbolt was originally only capable of 10Gbps, roughly the speed of a single PCI Express 3.0 lane - about a 16th of a desktop graphics card's maximum bandwidth. Next came Thunderbolt 2, capable of twice the speed - we're now talking about 20Gbps - and eGPUs became actually possible. The problem with Thunderbolt 2 is that very few eGPUs were made for it. One possible reason for this is that the majority of computers featuring Thunderbolt 2 were Macs. Not many PCs were built with Thunderbolt 2, and it makes little sense to build an eGPU just for Mac users, since most of us don't intend to play games on our Macs and use them more for productivity.

Thunderbolt now uses the USB-C connector

Of course, the natural successor to Thunderbolt 2 was Thunderbolt 3. Prior to the announcement of Thunderbolt 3, work was under way on improving the old USB standard with a more robust, smaller, yet more capable USB connector. Since USB Type A and USB Type B already existed, USB Type C was the name for this connector. USB Type C, or USB-C as it is often referred to, offered up to 10Gbps over the USB standard, aptly named USB 3.1. Not long after USB-C was announced, it was made clear that the new USB-C connector (remember, USB Type C is the name of the connector, not the standard) would also become the connector for Thunderbolt 3 (Thunderbolt has always borrowed connectors - originally it used the Mini DisplayPort connector as its interconnect).

With USB-C as the primary Thunderbolt connector, this one connector offers USB 3.1 speeds of 10Gbps, DisplayPort 1.2 speeds of around 17Gbps, HDMI 2.0 speeds of 18Gbps and Thunderbolt 3 speeds of 40Gbps. To me this is awesome. It means so much will get replaced by this one connector.

Let's take a look at what this connector is directly capable of:

  • Thunderbolt 3 - x4 PCI Express Generation 3
  • DisplayPort 1.2
  • HDMI 2.0
  • USB 3.1
  • VGA - through DisplayPort
  • DVI - through DisplayPort or HDMI
  • USB 3.0
  • Native PCI Express cards providing many other connections, such as FireWire, eSATA, RS232, LPT and much more
  • PCI (through a PCI Express converter)
  • Thunderbolt 1 and 2 devices

Because of the ability to connect straight on to the system bus (PCI Express), the system can indeed use many different PCI Express cards directly. 

So, whilst I was originally concerned that Thunderbolt would destroy all other connectors, the Thunderbolt standard seems to have added better native connectivity to older standards than ever before, which is amazing. One connector for all finally seems to be true.

As a developer, there is one thing at the very top of my list of decisions - the text editor.

The development environment needs to be pleasing and make you feel comfortable (I feel much the same way whilst developing Dash: if the content management system isn't user friendly, you can't be comfortable using it). I've been through a lot of editors - starting with a bunch of versions of Visual Studio, including Visual Studio 2005, 2008, 2010 and 2013. They are all brilliant and I'm glad that I made the choice to use them for the seven or so years I was a .NET developer.

Things changed quickly, though, as I became a developer based on Mac OS X and was forced to find a new editor that suited my development purposes. When I stopped developing in VB.NET and C# and began developing in Java, HTML, CSS, JavaScript, PHP and so on, I needed an IDE that would suit those purposes. For the vast majority of those (all the web-based ones) I used Aptana Studio 3. Aptana was brilliant, but it quickly felt dated, and I just could not afford the time to switch editors without being certain the new one was right for me. A good IDE needs to be extremely colourful (because that helps highlight the different syntaxes), fast, not prone to crashing (as Aptana eventually started doing) and feature rich. For me, one of the most important features of an IDE is support for SFTP, which Aptana offers out of the box. I then moved from Aptana to Eclipse with the Aptana plugin - pretty good, to be honest.

Eclipse is brilliant for Java development, and I still use it because it can compile a JAR file in so few steps, it can interpret and debug programs well and it just feels like it was designed for Java. However, Eclipse eventually became plagued by the same bug as Aptana and would crash from time to time - particularly when in the Web perspective.

So I made another move, this time to Adobe Brackets. I jumped on the Brackets bandwagon when it was pretty young, and I loved it. The syntax highlighting is lovely, it's feature rich and it's open source. Unfortunately, the jump was too early - Brackets just didn't have everything I needed. In 2015, I started an Adobe Creative Cloud subscription, so I gave Dreamweaver a try and I liked it (looking back, I don't really know why, other than the fact it had SFTP built in).

Introducing Atom

Atom is now my favourite text editor. After being introduced to it by a colleague at work, I feel like I've come to love it. It's colourful, well designed, doesn't crash and has everything I need from a text editor or IDE. 

Atom is my new IDE of choice

Why is Atom nearly the perfect editor, though? Well, my first reason is that Atom has clear colouring - its dark interface clearly separates the background from the foreground, and its syntax highlighting is bright and stands out well. On top of this, Atom features a plugin system, which means that if the feature you want is not built in, it's likely to be available as a plugin somewhere. Atom is also fast - it doesn't slow down too much as files get larger - and I'm talking about PHP files, which I always break into logical files that rarely exceed 3,000 lines.
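As an example of how low-friction the plugin system is, installing a package is a single command with apm, Atom's package manager (the package name here is just an illustration - there are several FTP/SFTP packages to choose from):

Bash
# Install a package from the Atom package registry
apm install remote-ftp

# See what's already installed
apm list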

People may ask: what about Visual Studio Code? Coming from a Visual Studio background, surely I'd like that? Well, yeah, I do. But I found Atom to be even nicer.

I think that if you are reading this and looking for a new text editor with a beautiful touch to it, Atom is well worth a try.

If you have a different favourite editor, I'd love to know what it is.

Whenever I am asked why I bothered building a personal website or why someone needs a personal website my reply is often something along the lines of 'it's fun' or 'it's my hobby'. But I very rarely touch on the benefits of my personal website.

There are a huge number of benefits to my own personal website. I get around 500 visitors a month. I use it to showcase my work to potential employers and to get myself on the internet in a public way that people can connect with me through, but there are other things too. I enjoy learning and teaching, so my website is also a source of information where I put tutorials to help others learn the things that I know.

But really, what's the benefit? My first answer is that it's professional. The brand that my website pushes forward gives me a unique identity that now appears across all of my work. The orange and blue theme of my website is also apparent on my CV, on any letterheads I send and on certain emails. It looks highly professional, and people like to see that. I also believe that having your own brand puts you above others who do not.

The second reason for having a personal website is that the website is, well, personal - it's all about you. LinkedIn is great for connecting, but it's full of other people too. Go to jamiebalfour.com and who do you think you are reading about? That's right, some guy called Jamie Balfour. There's nothing about John Szymanski or Murray Smith on there (well, there might be). This keeps the reader focused on you. You can write solely about how good you are and all of your achievements and, yeah, be a narcissist - blow your own trumpet!

The third reason I would say a personal website is a must is that it gives people an easy way to read about you, from all corners of the globe. Social media is great, but it's also laden with other things - like other people - a bit like LinkedIn.

I will admit my website is more of a personal project that evolved into something more. For anyone in computer science it's pretty nice to show that you can build a website from scratch, so I did exactly that (it shows a lot of perseverance too). 

Ultrabooks are amazing devices - combining high-end mobile computing power with a slim design and decent battery life. The first ultrabook released was probably the MacBook Air, a moment I remember like it was yesterday. I've always really liked them and had an interest in them, but I never went out and got one.

In the last few years I've been following the development of one or two of them, but one in particular came to my attention - the Razer Blade Stealth. This ultraportable features two USB 3.0 ports and one USB-C port. The USB-C port also supports 40Gb/s over Thunderbolt 3, allowing Razer to take advantage of the PCI Express lanes built into the laptop itself. As a result, Razer have developed the Razer Core, an external dock which features a full PCI Express x16 slot. This allows you to insert a discrete desktop graphics card into the dock and use it as the laptop's graphics processor. This is why this Ultrabook excited me so much.

Now the latest Razer Blade Stealth has arrived in the UK and is available to buy from their website.

The thing is, I'm saving my money, and unless I decide to sell my MacBook Pro - if that ever happens - I won't be buying this laptop.

Posted by J B Balfour in Tech talk

A couple of months back I was the victim of a website (not to be named) being hacked and ultimately giving away its users' information - including mine. The reason was that the passwords were not stored in a secure manner, which meant that the minute someone had access to the database, they had access to all of those passwords.

What this meant for me was that they had my email address and my actual home address, and they began to subscribe me to many things I would never sign up to, whilst also sharing my username and password details on the web. It's a cold and horrible thing for someone to contemplate doing, because I had done nothing to them in the first place to provoke an attack. And to be honest, the website's sole purpose was to help others - so it's pretty cruel to do that. Anyway, storing details about people in a secure manner is an important part of online security.

What an insecure database may look like

In a world where security wasn't an issue, algorithms such as SHA (the Secure Hash Algorithm) would not need to exist - in fact, the whole field of security would not need to exist. But unfortunately, because there are people who want to steal something, or simply damage something for the sake of it, we have to compensate by developing secure ways of storing information.

In that world without security, passwords could be stored as plain text - simply as they were typed into the text input. Anyone who has access to the database can then scroll to the appropriate field and read a user's password. Unfortunately, if a hacker gains access to this information, they have the raw password - the very password they can use to log in to the system. This is not good. So database designers, web developers and so on go a step further and use some kind of algorithm to conceal the password.

How to store sensitive data effectively

When data like a password is put into the database it should be encoded using some kind of algorithm.

The first way of storing passwords is to create or use an encryption algorithm to encode the password and a decryption algorithm to recover the password from the cipher text. This method is uncommon because it means there is always a way to decrypt every password in the database, which leaves open a security vulnerability: if someone obtains the decryption algorithm and the key needed to decrypt the passwords, they can simply decrypt every one of them (and with a weak, home-made cipher, a known password and its cipher text can be enough to work out the key).
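As a rough illustration of why that's risky, here's a sketch of reversible encryption on the command line - the key ("secret-key" below, purely for illustration) is all anyone needs to get the original password back:

Bash
# Encrypt a password with a symmetric key (illustrative only)
CIPHER=$(printf '%s' 'my password' | openssl enc -aes-256-cbc -pbkdf2 -a -pass pass:secret-key)
echo "$CIPHER"

# Anyone holding the same key can simply reverse it
printf '%s' "$CIPHER" | openssl enc -d -aes-256-cbc -pbkdf2 -a -pass pass:secret-key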

The most common approach is to use a hashing algorithm such as SHA. A good hash is designed so that, in practice, no two passwords produce the same hash value (finding two inputs that collide is computationally infeasible). It is also designed to be irreversible - that is, it is impossible (or at least near impossible) to figure out what the original text was, short of going through combinations of characters and testing each one against the stored hash. This method is more secure than the former since it does not offer a quick way to turn the stored value back into plain text.
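As a very rough sketch of that idea (not the exact scheme I use on my own site), you store a random salt alongside the hash of the salt and password combined - never the password itself. The salt simply makes sure identical passwords don't produce identical hashes:

Bash
# Generate a random salt, then hash the salt and password together (illustrative only -
# real systems usually use a deliberately slow hash function rather than one pass of SHA-256)
SALT=$(openssl rand -hex 16)
HASH=$(printf '%s%s' "$SALT" 'correct horse battery staple' | sha256sum | awk '{print $1}')

# What goes into the database: the salt and the hash, never the plain-text password
echo "$SALT:$HASH"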

These are just two ways of storing passwords, but you can probably find others. I use a combination of both on my website (my own hashing algorithm with my own encryption algorithm on top).