Friday, 10 March 2017

How to replace your PC with a Windows 10 tablet

Welcome to this post on how to replace your PC with a Windows 10 tablet, a.k.a. what I think my parents should do.

PC vs laptop vs tablet

My parents are in a situation that may sound familiar to many: they have an ageing desktop PC that isn't used often, but they still prefer the traditional desktop setup and 'performance' of a normal PC for doing work, while also liking a tablet for entertainment. My parents have never liked laptops much, so they would be unwilling to shell out for one. They are finally admitting that it's time to replace the old PC (mostly because Vista will no longer be updated!) and they want a bigger tablet than the 9.5 inch Amazon Fire they've got at the moment. But buying both a new desktop PC and a new tablet would end up costing a considerable amount. So, while we were perusing the technology section in John Lewis, I had a bit of a brainwave - why not find a device that can do both?

This blog post is the result of my searching into this possibility. I would seriously consider this option myself, if I didn't already have plenty of devices to suit my needs.

Replacing your PC with a Windows 10 tablet

First things first: I decided to suggest a Windows 10 tablet because it runs full Windows with no compromises. Sure, you could run certain apps on an Android or Apple device, but it wouldn't be quite the same as having a tablet built to run Windows.

I had considered a couple of ways that it might be possible to use a Windows tablet as a PC. You could potentially cast your screen to an additional monitor (through something like a Chromecast or the Miracast feature built into some Windows tablets) and hook up a Bluetooth keyboard and mouse. You could also use a micro-USB to HDMI adapter on a device that supports Mobile High-Definition Link (MHL), as long as the monitor supports that functionality. I thought about these options, but they would require a lot of fiddling around, would potentially mean buying a load more gear (my parents don't have a Bluetooth keyboard or mouse, for example) and are, apparently, unreliable and suffer from lag.

I settled on suggesting a Windows 10 tablet that has its own video out port (either a Micro-HDMI port or Mini DisplayPort) and a USB hub of some description. I have only looked at tablets by Microsoft and Lenovo because these are the brands my parents were looking at, and they seem to be pretty solid options.

Windows 10 tablets by Microsoft

Luckily, Microsoft has already thought about this use-case and released several tablets that were 'built to replace your laptop'. There's the Surface 3, Surface Pro 3 and Surface Pro 4. These are all highly rated, but I'm going to rule out the Surface Pro 4 straight away because it's too expensive for what my parents need.

So that leaves us with the Surface Pro 3 and the Surface 3. Unfortunately, these are both now much harder to find, as Microsoft seems to have stopped shipping them in favour of their latest model. This doesn't mean it's impossible to get one, just that it might not be found as easily in major shops.

The Surface Pro 3 has a 12 inch screen, weighs 800g and should have a battery that lasts all day. It comes in a range of models: from an i3 processor, 64GB of storage and 4GB of RAM at the bottom end, to an i7 processor, 512GB of storage and 8GB of RAM at the top. It has a kickstand on the back that can be fixed at any angle, and there's an optional Pen and Type Cover for extra usability. I probably wouldn't bother with the extras, as this tablet isn't intended to be used as a laptop. The Surface 3 is cheaper and smaller; with a 10.8 inch screen, it's not much bigger than the tablet they already have (which is too small). It is lighter than the Surface Pro 3 but also less powerful. The choice between the two in this situation really comes down to size and weight. The Surface Pro 3 is bigger, but also heavier; however, its extra power should give it a little more usable lifespan before we need to have this discussion all over again!

Surface Pro 3

Windows 10 tablets by Lenovo

Of the other manufacturers, Lenovo seems to be the best. They make several tablets and 2-in-1 devices, but the range I'm thinking about is the Lenovo Miix range. The 310, 510 and 700 match up closely with the specs of the Surface 3, Surface Pro 3 and Surface Pro 4 respectively. Lenovo seems to be able to produce these tablets at a significantly lower price point and in a lighter package. The only downside is that a keyboard, which will probably never be used, comes bundled with the tablet.

Lenovo Miix 310


Both the Lenovo and Microsoft tablets have Micro-HDMI ports, which can be used to drive almost any screen if you have the right adapter. There are several docking station options, which add convenience but can be pricey. There are also several USB hub options; the main decisions here are how many devices you want to plug in at once and whether the hub needs its own external power supply to drive all of the connected devices. Powered hubs can handle more devices without draining the tablet's battery, but come at a higher price.

Anker 60W 7-port USB hub + 3 Power IQ charging ports

Final Verdict

It is clearly possible to have just one device that works as both a PC and a tablet. With the right setup, you can have a virtually seamless experience across tablet and desktop scenarios. I think any of these configurations could work, as long as the tablet you buy can handle the tasks you're planning to throw at it. I wouldn't worry much about storage, that's what external hard drives are for! For those of you thinking about ditching that ageing PC, this could be a great option.

For my parents, please just ditch that old PC!

Friday, 23 September 2016

How humans can gain senses

You may have wondered why humans can't see in the dark, or whether we could learn to see radio waves. The short answer to why we don't have these sensory systems is the evolutionary cost compared with the very small gain, if any. It would be incredibly energy- and time-consuming to evolve good enough night vision when we can just go to sleep at night and see during the day. So far our senses have served us well, but what if we could give ourselves new senses with the aid of technology? After all, we can invent new technology much, much faster than Mother Nature can invent X-ray vision eyeballs!

Neuronal plasticity is the ability of neurones to change their connectivity with one another in order to gain new functions or to modify existing functions. This plasticity is essential for memory and learning, and gives us the ability to perceive the world around us. By introducing a new input to the brain, you will trigger the process of neuronal plasticity as the brain tries to work out what this new information means and what to do with it.

There have been numerous investigations into whether gaining or replacing senses is possible – unsurprisingly, a lot of this research has been done by the US military. If you'd rather watch a video about this, David Eagleman's TED talk is excellent.

Check out David Eagleman's TED talk here
Blind people have been doing this for years with Braille: what starts out as just a series of bumps can quickly take on new meaning as powerful words, just as when a sighted person learns to read. If you take this line of thought to the next level, you can teach a blind person to see without even bothering with eyes!

The FDA recently approved just such a device, called BrainPort. It uses one of the most sensitive parts of our bodies, the tongue, to decipher patterns of electrical impulses generated by a camera mounted on a pair of glasses. Within about a week, blind people can perceive shape, size, distance and movement, all through electrical impulses to the tongue.
The BrainPort device and how it works
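To make the idea concrete, here's a toy Python sketch of BrainPort-style sensory substitution. This is my own illustration, not the device's actual pipeline: the `frame_to_electrodes` function, the 20 x 20 grid and the 0-255 intensity scale are all invented for the example. It simply averages a grayscale camera frame down to one value per 'electrode'.

```python
# Toy sketch (not BrainPort's real pipeline): downsample a grayscale
# camera frame to a coarse grid, one average brightness per electrode.
def frame_to_electrodes(frame, grid=20):
    h, w = len(frame), len(frame[0])
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Average the pixels that fall into this grid cell
            ys = range(gy * h // grid, (gy + 1) * h // grid)
            xs = range(gx * w // grid, (gx + 1) * w // grid)
            total = sum(frame[y][x] for y in ys for x in xs)
            row.append(total / (len(ys) * len(xs)))
        out.append(row)
    return out  # grid x grid electrode intensities, 0-255

# A 100x100 frame: bright on the left half, dark on the right
frame = [[255 if x < 50 else 0 for x in range(100)] for y in range(100)]
electrodes = frame_to_electrodes(frame)
```

In the real device the grid is refreshed many times a second, and the brain does the hard part of learning what the patterns mean.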
Going back to Eagleman, his device is called VEST (Versatile Extra-Sensory Transducer), and it essentially transforms the waveform of sounds into vibrations that you can feel on your body, which you can eventually learn to interpret as the experiences usually delivered by the ears. Apparently it has been a success, and work is underway to make it a viable replacement for human ears.
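As a rough illustration of the VEST principle (again, my own sketch, not Eagleman's actual algorithm), you can split a sound's frequency spectrum into a handful of bands and drive one vibration motor per band. The function name, the eight-motor count and the naive DFT are all assumptions made for the demo:

```python
import math

# Toy sketch of the VEST idea: split a sound's frequency spectrum into a
# few bands and map each band's energy to one vibration motor's intensity.
def sound_to_motors(samples, n_motors=8):
    n = len(samples)
    n_bins = n // 2 + 1
    # Naive DFT magnitude per frequency bin (fine for a short demo signal)
    mags = []
    for k in range(n_bins):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    band = max(1, n_bins // n_motors)
    energy = [sum(mags[m * band:(m + 1) * band]) for m in range(n_motors)]
    peak = max(energy)
    return [e / peak for e in energy]  # motor intensities in [0, 1]

# A pure 440 Hz tone sampled at 8,820 Hz: nearly all the energy lands in
# the lowest-frequency motor
rate, dur = 8820, 0.05
samples = [math.sin(2 * math.pi * 440 * i / rate) for i in range(int(rate * dur))]
intensities = sound_to_motors(samples)
```

The real trick, of course, is that after enough exposure the wearer stops noticing the vibrations as vibrations at all.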

If you're looking for something similar that you can buy yourself, I recently stumbled across this incredibly ridiculous backpack with a subwoofer built in!

There are likely many more ways in which this "sensory substitution" could be used in the future, and they could be transformative for both disabled and healthy populations. If you enjoyed this post, please share or tweet it and spread the word! Feel free to follow this blog and leave a comment below as well.

The Atlantic
Incognito: the secret lives of the brain, David Eagleman (2011)

Monday, 19 September 2016

Medical Writing: Do you need a PhD?

When I was looking into becoming a medical writer, I was just at the end of my MSc and trying to decide whether to try to get a job in medical communications straight away, or to do a PhD and then move into Med Comms later. The short answer is no, you don't necessarily need a PhD to be a medical writer, but some employers think you do and it certainly seems to be the ideal.
After looking around on the internet, I found a few forum posts asking the same thing: "should I do a PhD to get into Med Comms?". Most people said no, not necessarily. But as far as I could tell, all of them did have PhDs and were just telling people "well, maybe yes and maybe no".
Since getting a job as an associate medical writer without a PhD, I've heard a lot more of this conversation occurring within the industry, especially at careers fairs. A lot of people do have PhDs, and quite often post-doctoral experience. But that is normally because they started out in academia and then discovered medical writing, not because they thought that experience would help them get into it.
I recognise that my position is not common: I proactively went to an employer that happened to be thinking about taking on junior staff. I also advertised my CV on some job sites and was later contacted by a few agencies. Some thought a PhD might be useful and some didn't, so don't be disheartened if one company says no.
As far as I have learned, moving up to more senior positions will not be hampered by not having a PhD either, as the skill set of an academic is somewhat different to that of a medical writer (although not entirely).

Getting that first job is what counts; you can gain far more relevant experience from working within the industry than in academia
I think that doing some science blogging may have helped me to get the position by demonstrating an interest in writing about science – anything that shows interest, capability and initiative is going to help you stand out from the crowd.
My advice to anyone thinking about this line of work would be to talk to potential employers and see whether you would be a good match in terms of culture, major therapy areas, the role you can expect to take up and so on. Remember, these people want to see potential in their junior staff – someone they can sculpt into a successful writer at their company.
In my opinion, going through the stress of getting a PhD if you don’t want a job that requires one is an option to avoid. If you can get a foothold in the career that you want to end up in and gain some relevant experience, then that would be the obvious choice to me.
Feel free to leave any questions in a comment below and I’d be happy to talk about this in greater depth.

Tuesday, 13 September 2016

Exploring the guidance computers used in the first Apollo flight missions

A man named Francois Rautenbach has recently become the owner of an Apollo Guidance Computer (AGC) from the Apollo missions. The type of computer he has was the first to be launched into space as part of the Apollo missions, and Rautenbach argues that it was the first computer ever to use integrated circuits, or microchips (as opposed to mainframe computers with big paper tapes), and the first to have re-programmable software.

To be clear, the exact computers in the videos below appear to be a test AGC with no memory modules – one that may have been used to develop the software that eventually sent man to the Moon – along with the actual memory modules used in the first test flight of the guidance computer system in a rocket, flight AS-202. Both the development computer and the memory modules paved the way for the eventual Moon landing of Apollo 11.

You'd be forgiven for thinking that something like this would be stored in a museum somewhere; but no, it was all scrapped and sold at auction to a man in Texas. After attracting some attention for selling some of the other pieces of computing and space history on eBay (even though it was all legitimate and his to sell) he went quiet and still remains anonymous. 

Rautenbach managed to befriend this mystery man and went over to see the AGC for himself. He even managed to get the memory modules shipped back to his native South Africa, where he has published a series of videos of him unboxing, reading and even taking X-rays of them. 

Video: The computer suspected to have been used to code the software for the Apollo missions

Flight AS-202

Flight AS-202 was the test flight for the guidance computer before the ill-fated Apollo 1 mission. The type of computer these memory modules hold the software for was the guidance computer used in all the unmanned Apollo missions; a more sophisticated evolution of this computer was used for the manned flights.

Flight AS-202 launched in August, 1966

The rocket (a Saturn IB with an Apollo service module and command module) reached its desired height, ran a few tests and then splashed down into the sea, where it was recovered by the US Navy.

Memory modules

Rautenbach has managed to extract the raw waveform data stored in the memory modules from flight AS-202 and is currently preparing to release the full set for anyone to see. He and some of his counterparts are aiming to analyse these waveforms to recover the full raw binary code of 1s and 0s and eventually reverse engineer the software on this computer. They previously achieved this and made a virtual emulator of another Apollo mission's computer.

Video: Reading the memory modules for flight AS-202

If you're not interested in the memory modules, at least watch the video from this point for footage from on board the rocket.

Rope memory and magnetic cores

Unlike computers of today, where information can be readily written, read and overwritten, these early computers tended to have read-only memory (ROM), because they were hardwired. Literally: in order to store any piece of software, the individual instructions had to be wired by hand, by women in a factory, with the aid of the machine seen in the video below. The modules even became known at MIT as little old lady memory, or LOL memory, because of this manufacturing method!

Video: construction of rope memory

After a little digging around on Google and in YouTube comments, I found out a few more things about this software: 

    • A bit is a single binary digit, so a single 0 or 1 - get it? 
    • Each instruction or word of the code (add, compare etc) is made up of 16 bits - most programs nowadays use 32 bit or 64 bit instructions
    • Each magnetic core (we'll get onto how these work later) can accommodate 128 individual wires, divided into 8 strands of 16 wires. So each core contains eight individual 16 bit instructions
    • As in the video, with 512 cores each containing 128 bits, each module contains 65,536 bits, or 4,096 instructions, or 64 Kb
    • Remember a single byte (B) is 8 bits (b)
    • In the video, each module contains about half a mile of wiring to get 64 Kb. If you scaled that up to the 64 GB (gigabytes) of memory you can easily find in SD cards, you'd need around 4 million miles of wiring!
    • 64 Kb = 8 KB, 64 GB/8 KB = 8 million;
      8 million x 0.5 miles = 4 million miles
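If you want to sanity-check those numbers yourself, the arithmetic in the bullets above can be written out in a few lines of Python:

```python
# Back-of-the-envelope check of the rope-memory numbers above
cores_per_module = 512
bits_per_core = 128          # 8 strands of 16 wires
bits_per_word = 16

bits_per_module = cores_per_module * bits_per_core   # 65,536 bits = 64 Kb
words_per_module = bits_per_module // bits_per_word  # 4,096 instructions
kilobytes_per_module = bits_per_module / 8 / 1024    # 8 KB

# Scaling the ~0.5 miles of wiring per module up to a 64 GB SD card
sd_card_bytes = 64 * 1024**3
modules_needed = sd_card_bytes / (bits_per_module / 8)
miles_of_wire = modules_needed * 0.5                 # ~4.2 million miles
```

Using binary gigabytes the exact answer comes out at about 4.2 million miles, so "4 million miles" is the right ballpark.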

The magnetic cores themselves are very interesting too. An electrical current of a certain polarity causes the magnetic core to be magnetised in a certain direction, with the opposite current causing the opposite direction of magnetisation. These two states can be read as 1 or 0. To change the magnetisation, you change the polarity of the input current.

If you then wire two cores together, you can create a system in which changing the magnetism in one core causes the opposite change in the other core, i.e. you can transfer bits from one core to another in a circuit.
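As a toy model of that bit-transfer idea (a deliberate simplification of mine; the real circuits involve drive currents, sense wires and timing pulses), you can treat a chain of cores as a one-bit-wide shift register:

```python
# Toy model: each core stores one bit as its magnetisation direction, and
# a transfer pulse shifts every bit into the next core along the chain.
def transfer(cores):
    """Shift each bit one core to the right; the first core resets to 0."""
    return [0] + cores[:-1]

chain = [1, 0, 1, 1]
chain = transfer(chain)   # every bit has moved one core along
```

Repeated pulses march the bits down the chain, which is the essence of how bits move through a core circuit.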

If you scale that up and add in various extra complications that I don't understand anywhere near well enough to attempt to explain, you end up with a memory module for a rocket's guidance computer.

If you want to learn more about how computers used to work in those days, or would just like to amuse yourself with an old fashioned US military training video, take a look below: 

Video: Magnetic Cores I - properties

Video: Magnetic cores II - simple circuits

Friday, 2 September 2016

Where to find accurate information on news about drugs for Alzheimer's disease

The recent publication of a clinical trial of a drug (Aducanumab) being developed to treat Alzheimer's disease has hit the mainstream media big time. In all of this commotion, especially when the reporting may not have been written by someone familiar with the field, it can be easy to get swept away by the huge media buzz and feel like "this is it! The new wonder drug everyone has been waiting for is finally here!"

Except that's not quite the case, as it rarely ever is when a scientific story hits the news. Whether it's the BBC, online science blogs or your tabloid newspaper of preference, all of these news outlets are competing for your attention and striving to be the first to break the story.

I won't go through the study myself, because I'm about to point you to two excellent sources that have already done a brilliant job of it, but there are a few things I want people to clearly see:

  • This is quite an early study and the failure rate of drugs that have reached this stage is still high, especially in Alzheimer's

  • Its primary aim is to show the safety and tolerability of the drug; it is not designed to give clear evidence about its effects on cognition, as the cognitive measurements were clearly listed as "exploratory"

So where can you go to find trustworthy information about science in the news? 

This can be quite difficult, especially depending on the subject, but the NHS has a special Behind the Headlines section which often does an excellent job of giving you the information you need, along with the explanation of how it fits into the rest of the scientific picture. 

Their article on the Aducanumab drug trial is available here

For this specific news article, I would recommend you read this excellent article by my old course-mate for Alzheimer's Research UK

In future you might want to check out Health News Review, which has a team of experts that voluntarily review health news stories and press releases.
You might also want to try and access the original scientific paper. These are often more balanced and cautious than the media coverage itself, but journals often require a subscription or some other form of payment to access the full papers. In future you will see more and more journals making their articles free to everyone (called open access), but the most prestigious journals, which hold most of the major stories, still require payment.

Alternatively, if you just want to know more about the set-up of a clinical trial rather than the results, you can check clinical trials databases such as the EU Clinical Trials Register. You may need to find the clinical trial number in order to find the study you're looking for, though.

If you do decide to read scientific papers or clinical trials databases for yourself, Health News Review has a good Tips for Understanding Studies section. Wherever you get your science news from, though, it's always a good idea to try and look a little bit deeper.

Thursday, 18 August 2016

Why is the sea salty?

This is a long-standing request that I have been waiting to do, although at first I wasn't quite sure I had that much to say about it. But before we get to why the sea is salty now, and how salt is left behind when water evaporates, let's talk about how all that salt got into the sea in the first place. Better yet, how did we even get seas for all that salt to be in?

Cast your mind back, say, 3.5 billion years, give or take, when the world was still a huge molten rock. In those days, Earth's early atmosphere contained a lot of hydrogen, and probably quite a lot of water vapour too, as oxygen and hydrogen bond readily to form water. But, as the Earth was a giant volcanic molten mess at the time, the surface and atmosphere were far too hot for liquid water. As you may know, boiling points depend on conditions such as pressure (that's why pressurised systems, like coffee machines, can heat water beyond its normal boiling point), and also on the composition of the atmosphere, which was losing hydrogen through the power of the Sun. The Sun's energy can split water into oxygen and hydrogen; the hydrogen can then escape Earth's atmosphere, so the relative amount of oxygen compared to water increased. I don't quite understand why (probably something to do with pressure), but apparently this caused the temperature at which water remains a liquid to increase.

As the boiling point of water rose until it hit the 100 degrees Celsius it sits at today, the temperature of the atmosphere gradually fell as the earth cooled. Eventually the temperature of the atmosphere dropped below the boiling point of water, and you have rain. Lots and lots of rain.

While the atmosphere may have been cool enough for liquid water, the surface certainly wasn't. As the rain fell and was immediately turned back to vapour, it slightly cooled the rocks, and as the vapour condensed back to liquid high up in the atmosphere, the heat it had gained was eventually lost to space. This cycle could easily have continued for a million years, until eventually most of the water on Earth was liquid. The water then pooled in the lowest-lying regions of the Earth's surface to form the oceans.

So how did the sea get so full of salts and other minerals?

There would have been (and still is) a lot of carbon dioxide dissolved in the early oceans, which makes the water acidic. Other acids, like hydrochloric and sulphuric acid, may also have been around. These acids ate away at the Earth's rocky surface, gradually adding salts and minerals like common salt and calcium. When life came about, these minerals were put to use. Calcium carbonate salts are used for producing hard shells and coral reefs, and sodium chloride was used to form the very first nervous system (or possibly potassium chloride – there is some debate). Even now, in all animals, salt is vital for the functioning of nervous systems, and a lot of energy goes into maintaining the perfect balance of sodium and chloride ions dissolved in various cellular and extracellular fluids for optimal performance of the nervous system.

So now we have seas and we have salt; all that's left is how the salt stays dissolved in the sea when the water evaporates. In essence, salt transitions to a gas at a much higher temperature than water, so when the sea's surface is warmed by energy from the Sun, water evaporates far more easily than salt, which is left behind.

To go a little deeper, common salt is made up of one sodium atom (Na) and one chlorine atom (Cl), making sodium chloride (NaCl) – the salts in the sea are much more varied than that, but for simplicity let's just talk about common old table salt. The two atoms are normally held together by an ionic bond, because an Na ion has a positive charge (Na+) and a Cl ion has a negative charge (Cl-). For salt to dissolve in water, water has to interact with salt molecules more strongly than salt interacts with other salt molecules. As the interactions between different salt molecules are weaker than their interactions with the many, many more water molecules in the sea, salt dissolves and dissociates into its component ions (Na+ and Cl-), so that Na+ sits on the slightly negative oxygen side of a water molecule, Cl- sits on the slightly positive hydrogen side, and the ions from the salt become mixed in among the water molecules.

As you can see in this image, it is not just one water molecule that is required to dissolve one molecule of NaCl, but many – apparently the minimum is somewhere between 6 and 9 water molecules per salt molecule. If you want to get a little more complex, you can start to talk about free energy. Essentially, the balance between the energy used up by breaking the ionic bonds in the salt crystal and the energy released by forming new polar interactions with water works out in favour of dissolving the salt crystal – and so it dissolves.
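To put ballpark numbers on that energy balance, here's a minimal sketch using rough textbook values for NaCl – the three figures are my own approximate additions, not from any source in this post. Interestingly, the enthalpy change alone is slightly unfavourable; it's the entropy gained by mixing the ions into the water that tips the free energy negative:

```python
# Rough free-energy check for dissolving NaCl (approximate textbook
# values in kJ/mol): breaking the lattice costs energy, hydrating the
# ions releases nearly as much, and the entropy gain tips the balance.
lattice_energy = 787            # energy needed to pull Na+ and Cl- apart
hydration_enthalpy = -784       # energy released as water surrounds the ions
delta_h = lattice_energy + hydration_enthalpy   # ~+3 kJ/mol: slightly endothermic

temperature = 298               # K, roughly room temperature
delta_s = 0.043                 # kJ/(mol*K), entropy gain on dissolving
delta_g = delta_h - temperature * delta_s       # Gibbs free energy change

dissolves = delta_g < 0         # negative free energy => dissolving is favourable
```

A negative free energy change is what "works out in favour of dissolving" means in thermodynamic terms.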

Why does salt remain in the sea when water evaporates?

When the sea is heated by the sun, the water molecules gain energy and are able to turn into vapour (called a phase transition) much more easily than salt ions and therefore water evaporates and salts don’t in those conditions.

When the evaporated water eventually cools and falls as rain on land somewhere, it makes its way back to the sea, picking up lots of sediment full of salt and other minerals along the way. Some of this sediment can be used by all sorts of sea creatures to form their shells, or it just falls to the sea bed and, compressed over many thousands of years, eventually forms sedimentary rock.

If you boiled away all the water, you would eventually find that salt crystals began to re-form. This is because, as there is less and less water, it becomes more energetically favourable for the component ions to start re-forming salts and to form a precipitate; the equilibrium between dissolved and precipitated salt shifts towards the solid. This is sort of what is happening in the Dead Sea, which has an incredibly high salinity, although it's more likely that a load of salts have been deposited there and have simply accumulated.

So there you have it, how we got seas, how they got salty and why they remain salty today. Thanks for reading.

Thursday, 30 June 2016

Take a look at how the visual system works

You've probably never really thought about this before, and neither had I until I read about it a month or so ago. But it turns out that your eyes are pretty much the least important player in vision. You might be pretty sceptical at this point, so I want you to close your eyes and imagine a boat. Done it? Okay, good – you just saw something without using your eyes!

At this point, you're probably wondering where I'm going with this. Well, I'm about to show you how much work has to be put in to see something properly, and hopefully you'll realise just how impressive the machinery in between your ears really is.

First, let's start with how we get from an array of photoreceptors in the retina to a complex 3D image of the world around us. There are two types of photoreceptors in the primary visual system: rod cells can detect low levels of light, but they do not give very good acuity or colour vision; cone cells can detect colours and have very good acuity, but they need high levels of light intensity. When these photoreceptors detect light, they send a signal to an area called the primary visual cortex (PVC) at the back of your brain, which you can see below.

The primary visual cortex (PVC) sits just at the back of the brain, with much of the occipital lobe devoted to visual processing in some way.

The PVC deciphers the signals that came from your retinas, works out what's in your field of vision and then sends signals to numerous other areas of your brain focused on memory, decision making and anything else that could create your "conscious awareness of the world". This all occurs within around 200 milliseconds – pretty impressive, right?

To create a visual representation of our world so quickly, our brains use a number of assumptions about how the world works so that they can take shortcuts. For example, because we want to be good at spotting other humans (and predators), our brains are primed to spot faces everywhere. Because of this face bias, we are prone to seeing faces where they do not really exist (Jesus on your toast, anyone?). If you want to learn more about these assumptions, look up Gestalt psychology.

The assumptions our brains make can sometimes be wrong. I'll put a few examples below, but we've all got our own favourite visual illusions, so please post a link to yours in the comments if you like.

This is no Photoshop manipulation, but a real room with such a crazy layout that when girls of the same height stand in different corners, they appear to be completely different heights. Hint: the "bigger" girl is higher up and closer to the observer than the "smaller" girl.

If you watch this mask spin, you'll quickly see that when the hollow back of the mask comes into view, it suddenly appears to be pointing out towards you and spinning in the opposite direction.

So now you can see that while it's easy to pick up information, deciphering it requires much more work, and we have very sophisticated systems that allow us to create a vivid and detailed picture of our lives. You really see just how impressive this is when you step away from 2D still images and move on to 3D moving images. Because you're dealing with so much more information in moving images and real life, your brain has to rely on its assumptions and strategies (e.g. selective attention) even more, which can greatly influence what you notice. Try the video below to test out your own selective attention abilities.

Here your brain focuses on one small element of a scene, leading us to miss some pretty obvious goings-on in the rest of the scene. This isn't just a clever lab trick or an amusing video; it happens all the time in our daily lives (obviously we don't notice), as you can see from this final clip.

What is out there to be seen and what we actually see are completely different – and they are massively influenced by our consciousness.

If you want to learn a bit more about the neuroscience behind how vision works, this is a good starting point. Also check out this website for a really great tool that allows you to easily see the power of the visual system. 

I hope that I have illustrated the real processing power of the human brain and how easy it can be to understand a little bit more about what is going on behind the scenes in our brains.

Please feel free to comment, share or follow this blog and I'm always open to topic suggestions.