Thursday, 18 August 2011

Popular misconceptions about Science and Scientists

The popular press, and people in general, seem to misunderstand what scientists are, what they do, and what their goals are. This article clarifies these matters.

1. Scientists do not all work in laboratories, do not all wear lab coats, and do not all perform wacky experiments with chemicals. Broadly, the scientists who work in laboratories are the experimentalists - experimental physicists, chemists and biologists - sometimes assisted by lab technicians. Lab technicians are not full scientists; they are support staff trained in preparing experiments and maintaining equipment. Mathematicians and theoretical physicists do not work in labs at all.

2. Sometimes, lab technicians are higher-level students paying off a bursary or study loan. In that case, they may be postgraduate students. A postgraduate student is one who has already obtained her undergraduate or bachelor’s degree in her area of study, and is now specialising. Her supervisor - the person to whom she reports - will usually be a senior scientist or professor. A professor is a university employee who has published a substantial body of research and has been awarded the title ‘professor’, because he ‘professes’ to know his area well. Not all professors are scientists. There are professors of Engineering, for example, and some engineers do not consider themselves scientists; ‘scientists’ tend to be theoreticians, whereas engineers are concerned with implementation. Most professors hold a PhD or doctorate. Your general practitioner (GP), by contrast, typically holds a professional medical degree rather than a research doctorate, despite the courtesy title ‘doctor’. Degrees differ between countries, but the usual ladder, in increasing order of study, is: Diploma (in some countries), Bachelor’s Degree, Honours Degree (in some countries), Master’s Degree, Doctorate. (A ‘post-doc’ is a research position held after the doctorate, not a further degree.) ‘Professor’ is an honorary title awarded on the basis of research work. Not all lecturers at a university are professors; often a department or subject area has only one full professor alongside several adjunct or associate professors.

3. A scientist is usually a university employee, and usually a lecturer. But some scientists work for parastatal entities, private research labs, pharmaceutical companies, weapons companies, or government-funded research labs. An example of this is CERN - the European Organisation for Nuclear Research, straddling the Franco-Swiss border near Geneva. Its researchers use a large underground particle accelerator, some 27 kilometres in circumference, to study the particles that make up matter. They are scientists who work for an entity that is not a university.

4. The term ‘scientist’ usually applies to someone studying a science, rather than, say, economics, education, the humanities, architecture, or other disciplines. Mathematics, chemistry, biology and physics are considered ‘hard’ sciences. ‘Soft’ sciences might include the social sciences, such as population studies, politics, and so on. One might consider these to be sciences because they make use of statistics - a branch of mathematics - in their research. Statistics is not a form of guesswork; it is a rigorous branch of mathematics with strict formulae and rules.

Importantly: not all scientists work in medicine. You often see the remark online: "why are scientists researching this useless thing when they could be finding a cure for cancer?" That just means the commenter doesn't understand that science has many different research areas. It's like asking, "Why do biologists study living organisms when they could be designing rockets to get to Mars?" Scientists have specialist areas of study, and they study the area that interests them; only some of those areas touch on medicine. Here's a short list of research areas: Mathematics (applied and theoretical), Computer Science, Statistics, Physics (particle, Newtonian, astro-, geo-, relativistic, etc.), Chemistry (applied, biological, industrial, physical, inorganic, organic, etc.), Biology (botany, zoology, evolutionary, palaeo-, human, micro-), Medicine, Psychology (neuro-, clinical, industrial, etc. - there is some debate about how much of psychology is a science), Sociology (debatable), Geography, Geology, Palaeontology, Archaeology, Anthropology, and so on. Only one of these deals with medicine.

5. Mathematics differs from the hard sciences in that it is not experimental: it proceeds by proof from axioms - starting assumptions of the kind that give us 1 + 1 = 2. The hard sciences - physics, biology and chemistry - deal in observable evidence, and construct ‘theories’ to explain that evidence. So they use experiments and evidence, whereas mathematics uses deduction. Mathematics has a number of branches, such as computational and applied mathematics, which overlap with Engineering, Physics, Computer Science and Astrophysics, and it also includes the branch of Statistics. Some subject areas, therefore, are a bit fuzzy in terms of where they fall: you can study computer circuit design in Computer Science as well as in Electrical Engineering.

6. The process used in the hard sciences to ‘discover’ something, or to come up with a theory or law, goes more or less like this. A scientist or layperson makes an observation - sees something happen, or something that simply exists - and wants an explanation. The scientist then uses her existing theoretical knowledge to come up with a ‘theory’: a rigorous, often mathematical, model of what could explain the observation. The theory makes predictions: it claims that, if it is true, certain things that have not yet been observed will be observed under certain conditions. An experiment is then performed to test the theory. If the experiment succeeds, the scientist repeats it, and then writes a paper on it, which gets reviewed. So, for example, we may want an explanation of what water is. A scientist will theorise that water is a combination of hydrogen and oxygen, in a 2:1 ratio, formed under heat. She will perform an experiment to test this theory. She will state a ‘null hypothesis’, which assumes the theory is false - that combining hydrogen and oxygen under heat does _not_ yield water - and she will run a ‘control’: say, keeping the hydrogen and oxygen together in a cylinder without applying heat, to see whether water appears anyway.
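To make the water example concrete, here is the hypothesised reaction written out as a balanced chemical equation, together with the null hypothesis - my notation, not the author’s:

```latex
% (requires amsmath for \xrightarrow)
% The theory: two hydrogen molecules combine with one oxygen molecule,
% under heat (Delta), to give two molecules of water - a 2:1 ratio of
% hydrogen to oxygen, as the author says.
\[
  2\,\mathrm{H}_2 + \mathrm{O}_2 \;\xrightarrow{\;\Delta\;}\; 2\,\mathrm{H_2O}
\]
% The null hypothesis denies the claimed relationship:
\[
  H_0:\quad 2\,\mathrm{H}_2 + \mathrm{O}_2 \;\not\longrightarrow\; 2\,\mathrm{H_2O}
\]
```

The experiment then tries to force a decision between the theory and its null hypothesis.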

So, next time you hear about a scientific ‘theory’, please understand that it does not mean what ‘theory’ means in the phrase ‘in theory, Man United ought to win’. It means something much, much stronger. In science, a ‘theory’ is a well-tested explanatory model - a body of corroborated facts and predictions, often expressed mathematically. It is called a ‘theory’ purely out of modesty: any theory remains open to further testing, verification, and potential disproof. But theories can _only_ be disproven by proper scientific method, as described above. A scientist’s personal beliefs are irrelevant, and a layperson’s opinions are irrelevant. You cannot merely ‘disbelieve’ in the theory of gravity; you’re sticking to the ground quite tenaciously, and that is just a fact. The same applies to other theories: merely disbelieving them does not make their testability go away. In science, theories which repeatedly pass experimental tests are treated as established fact. Atomic theory, the Theory of Relativity, and the Theory of Evolution are all well-established scientific facts in this sense.

7. Once a theory has been tested by experiment, it is usually written up in a research paper called a ‘journal article’. Scientists have their own magazines - called _journals_ - which describe the latest research. There are thousands of journals, each specialising in a particular research area, and not all of them are in the hard sciences: there are journals of history, politics and philosophy, for example. A paper usually starts with a section called the ‘abstract’, which summarises it. When a scientist or other researcher submits a paper to a journal, it undergoes a process known as _anonymous peer review_: the people running the journal give the paper, with the author’s name removed, to the peers or academic equals of the submitting scientist. The name is removed to prevent the reviewers from being biased against the submitter, or in her favour. The reviewers check her mathematics and her reasoning, scrutinise her experimental method, and check that she has done her homework - that is, read and cited the existing recent research on the topic in their journal and others. (Full replication of the experiment usually comes later, when other groups attempt to reproduce the published result.) If she has performed her experiment properly, if her mathematics is correct, if she has ‘cited’ the relevant prior research, and if her conclusions follow from her premises, her paper is accepted for publication, with her name now visible to all. Subscribers to the journal - other scientists - then have a chance to write responses, usually criticisms, and the author can reply or work on the queries that come along. Strictly speaking, anyone who writes a response to a journal article is welcome to submit it and see if it is published. So, for example, if you truly believe that evolution is false - go ahead. Refute it in an academic journal.
You’re perfectly welcome to do so. There are no entrance requirements, other than that the paper be well-researched.

This is how science increases our knowledge. 

8. The more a paper is ‘cited’ - used as base research material in new papers - the more that paper’s work is respected and ‘rated’. A highly rated scientist is one who has produced a lot of cited research. Such a scientist may be earmarked by her university for promotion to professor, and her theory may come to be broadly accepted as scientific fact.
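Citation-based ‘rating’ of the kind described above is, in practice, often summarised by metrics such as the h-index - a measure the author does not name, added here purely as an illustration:

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has at least h
    papers with at least h citations each. One common way a scientist's
    citation record is boiled down to a single 'rating'."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break     # ranked order means no later paper can clear it
    return h

# Six papers with these citation counts give an h-index of 3:
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```

Three papers have at least 3 citations each, but not four papers with at least 4, hence h = 3.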

9. Instrumentalism versus realism. Some scientists consider their theories to describe physical facts: they take the entities their theories posit - atoms, molecules, energy, waves, quarks, and so on - to be real things. These scientists are called ‘realists’. Other scientists are only interested in whether a mathematical model or theory generates good predictions, and are not concerned with whether its theoretical entities exist (i.e., they don’t care whether atoms ‘really’ exist, as long as the theory works for making predictions). This type of scientist is called an ‘instrumentalist’, because they consider theories to be merely instrumental - useful - rather than factual. My experience of 23 years at a university suggests that most physicists are instrumentalists, but most chemists, biologists and astrophysicists are realists.

10. Funding. Not all scientists are funded by pharmaceutical companies looking for cures. Some are funded by government: they write a research proposal and a funding proposal, and a funding body - like the NRF in South Africa, or a government department such as the DHET - funds the research. Often the university itself funds it. The university research office may, for example, slice up the funding pie between departments - physics and medicine, say. So some research funding goes to physics rather than medicine; funding is split from a budget across various projects, and not all of it goes to cures for cancer. And if nothing but medicine were funded, we would know nothing else about the world - how to make computers, say, or aircraft. It is not the business of a layperson who doesn’t understand science to dictate which aspects of science should be funded, or in what proportion. Many spin-offs come from funding apparently useless studies. The space programme, for example, yielded materials and techniques later used in athletic-shoe cushioning; Kevlar, now used in bulletproof vests, emerged from chemical research into fibres for tyres; and the microwave oven was an accidental discovery made during radar research, which wasn’t looking for ways to heat things at all. So unexpected benefits can come from apparently useless research. Landing on a comet, more recently, has relevance for our knowledge of the early solar system, and for whether astronomical bodies are worth mining commercially.

The only funding in science which is questionable is medical and weapons funding, because it sometimes has corporate interests behind it. There is no commercial funding for evolution, gravity, relativity, and other theories with no obvious commercial motive. The way to check medical research for legitimacy is to look for it in journals and see whether it has been replicated by people who are not funded by the pharmaceutical company in question. If it has been replicated by disinterested parties, then it is not corrupt research trying to sell people “unnatural”, “non-holistic” cures for diseases. Moreover, the empirical evidence of the efficacy of “allopathic” medicine makes it perfectly clear that it is not mere corporate quackery. Most studies of homeopathic and related medical models show that they are no better than placebo - but that is a debate for another article. Homeopathy, too, is big business - a multi-billion industry - so if research is dubious merely because it makes money, then ‘all natural’ medicine is equally suspect. In any case, money is irrelevant to empirical results. If something works, is published in a journal, and has been replicated, that is all you need; who funded it becomes irrelevant at that point, just as Michelangelo’s work is not diminished by the fact that it was funded by a very wealthy church.

Funding is awarded by applying to a funding body. You have to write a long preliminary study showing why your research should be funded, and the reasons cannot merely be commercial viability; they usually have to include benefit to humanity.

This is how science actually works. 

Sunday, 14 August 2011

Scientists find cure for every virus and maybe ageing


According to a New Scientist article, a potential cure has been found for every virus the researchers tested, and the same body of cell biology sheds light on one cause of ageing.

According to an article in New Scientist, Todd Rider and his colleagues at MIT have found a way to wipe out any virus - or at least, any virus they tested - including the common cold, flu and HIV. This is ground-breaking news. Just as antibiotics are a silver bullet for bacterial infections, it seems these scientists have found a silver bullet for viruses.

For the reader who is not sure of the different types of disease, allow me to elaborate briefly. There are four broad categories worth distinguishing: viral, bacterial and fungal infections, and cancer - which, as we shall see, is not really an infection at all.

Cancer is not, strictly speaking, an infection. Rather, in cancer, a cell copies itself incorrectly when dividing - in the process we call “growing”. The DNA is not correctly replicated, and a faulty copy is produced. This faulty copy, lacking the normal ability to shut down or stop dividing, then produces further faulty copies. This creates a tumour - a ball of cells.

Fungal infections are, to put it as simply as possible, typically skin or membrane infections. Fungi are plant-like organisms; the type we are most familiar with is the mushroom, but ringworm and athlete’s foot are also fungal. They spread by means of spores, and grow as networks of microscopic thread-like filaments known as hyphae. They cause their harm by feeding on the surface of your skin or membranes. Most fungal infections can be cleared up by keeping the area dry and applying topical creams. As such, they are probably the least deadly of the various types of infection one can get, even if they can be very persistent. Not all fungi are harmful, however. Some moulds, for example, excrete penicillin as a waste byproduct, which is lethal to bacteria. Penicillin, as you probably know, was the first antibiotic.

Bacteria, on the other hand, are free-floating single-celled organisms - miniature creatures, for want of a simpler way of explaining it. Just as your body is made of billions of cells working in harmony, bacteria are individual cells that work in isolation. Bacterial infections typically cause their harm by consuming nutrients your body needs, and excreting waste products that are poisonous to you. Fortunately, bacteria are relatively large, and most cannot invade your body’s cells. The body’s immune system can typically deal with them by attaching ‘antibodies’ to the bacteria. Antibodies are proteins that the immune system manufactures in response to the initial invasion, and keeps in the bloodstream thereafter, ready to respond to a repeat appearance of the same bacterium. However, if a bacterium mutates - that is to say, _evolves_ - the body may no longer recognise it, and this is where antibiotics come in handy.

Lastly, we get to viruses, which are much trickier to deal with. Firstly, they are generally small enough to penetrate any cell in your body. Moreover, they are not even really alive - this is something of a debate in biology. In the biological sciences, a being is considered alive if it moves, takes in nutrients, excretes, and reproduces. Viruses do not take in nutrients or excrete, and they reproduce parasitically. In fact, a virus is more or less just genetic material - DNA or RNA - in a protein box. It lacks the ‘organelles’, the energy-processing parts, that a bacterium has. So what a virus does is invade a cell, and hijack the cell’s machinery to replicate the virus’s genetic material instead of the cell’s own. When the cell has made too many copies of the virus, it bursts open, releasing the new viruses, which go on to the next cells, and the process repeats.

Up until now, doctors have simply immunised us with vaccines. (There was recently a fuss in the popular press about vaccines causing autism and ADHD, but that claim has been thoroughly debunked.) A vaccine is simply a serum containing dead or inactivated viruses. When the body encounters these viral proteins, it produces antibodies to attack them. Hence, at a later stage, when the live virus appears, the body can immediately attack and defeat it, because it has been immunised. This is the best we have been able to do thus far, and it has been remarkably successful: smallpox, for example, was eradicated by immunisation. (The word ‘vaccine’ comes from the Latin _vacca_, ‘cow’, because the first vaccine used the related cowpox virus.) But certain viruses that mutate rapidly have been impossible to stop with vaccines - the common cold, and HIV, for example.

But this is where we get to the amazing new trick that Todd Rider and his colleagues have discovered. When viruses replicate, they produce long double-stranded RNA inside the host cell - something healthy human cells do not do. Rider’s solution is to inject a compound called DRACO - double-stranded RNA-activated caspase oligomeriser. When DRACO detects this viral double-stranded RNA, it forces the host cell to commit suicide (a process called apoptosis) before the viruses can finish replicating, or while they are in the process. The result is that the cell dies releasing, at most, incomplete parts of the virus. This is especially clever, because the body’s immune system can then develop antibodies against those parts as well. This means, ultimately, that DRACO can in principle kill any virus - by forcing the infected host cells to destroy themselves.
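As a crude illustration of the decision rule described above - of the logic only, not the real biochemistry - one might model DRACO like this (all names and thresholds are invented for illustration; the real DRACO is a protein, not a program):

```python
# Hypothetical cutoff (in base pairs) for what counts as "long" dsRNA -
# healthy cells make only very short double-stranded RNA stretches.
DSRNA_THRESHOLD = 30

def draco(cells):
    """Return the surviving cells: any cell containing long double-stranded
    RNA (the hallmark of viral replication) is triggered into apoptosis
    and removed; uninfected cells are left untouched."""
    return [c for c in cells if c["dsrna_length"] < DSRNA_THRESHOLD]

cells = [
    {"name": "healthy",  "dsrna_length": 0},    # no viral dsRNA
    {"name": "infected", "dsrna_length": 500},  # long viral dsRNA
]
print([c["name"] for c in draco(cells)])  # → ['healthy']
```

The precision of the real mechanism comes from exactly this asymmetry: only virus-infected cells contain the trigger.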

The trouble, of course, is if you are heavily infected: if a large fraction of your cells are infected, you will lose the function of the organ containing those cells. The idea, then, would be to apply DRACO before the infection gets out of control.

Note, however, that this is very different from chemotherapy. In chemotherapy, toxic drugs are introduced into the body which kill dividing cells fairly indiscriminately. The idea is that since the cancer cells are a rapidly dividing minority, they die off first, leaving enough healthy cells for you to continue to survive. DRACO, on the other hand, is a precision-targeting system: it only attacks cells which have viruses replicating in them, and causes them to commit suicide.

Now why, you may be wondering, would cells be able to commit suicide at all? The answer is to prevent DNA mutation - or, even worse, cancer. When a cell reproduces itself by splitting, in the process called mitosis, it makes a copy of itself using the DNA in its nucleus. However, the copying machinery cannot quite copy the very ends of each DNA strand, so the strands shorten slightly with every division. Hence, at the end of each strand of DNA there is a stretch of expendable, repetitive DNA that is safe to lose, called a telomere - which means “end part”. Once the telomeres wear out, the cell can no longer safely replicate without risk of becoming cancerous, so it commits suicide. All cells have the ability to commit suicide when they detect that their DNA is compromised. This is, for example, part of what happens when you go grey: the pigmentation cells which colour the hair die off after repeated replication, their telomeres finally worn out. Hence the hair no longer receives pigment, and grows out grey or white.
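The telomere mechanism described above can be sketched as a toy simulation - all numbers are invented for illustration; real cell biology is vastly messier:

```python
TELOMERE_START = 50    # arbitrary units of expendable 'end-cap' DNA
LOSS_PER_DIVISION = 1  # amount frayed off the ends at each replication

def divide_until_senescent(telomere=TELOMERE_START):
    """Simulate repeated cell division: each division shortens the
    telomere; once it is used up, the cell 'commits suicide' rather
    than risk copying into damaged DNA. Returns the number of safe
    divisions this toy cell line managed."""
    divisions = 0
    while telomere > 0:
        telomere -= LOSS_PER_DIVISION  # ends fray a little each time
        divisions += 1
    return divisions

print(divide_until_senescent())  # → 50
```

In this toy model, the length of the telomere directly fixes how many times the cell line can divide - which is the intuition behind the idea that preserving telomeres might delay ageing.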

Interestingly, if the telomeres could be preserved, we could potentially stave off some aspects of ageing, since cumulative cell death is one driver of ageing. This may also be part of why different species of animal have roughly characteristic life expectancies: their telomeres afford their cells only so many divisions.

We may just be heading into a future without viruses and without old age. Isn’t science incredible?

Saturday, 13 August 2011

Mac vs PC

Having been a long-time computer user - since 1983, in fact - I believe I have a fairly authoritative opinion on this traditional debate. Let me give a brief synopsis of my personal experience; this will perhaps explain why I still use a Mac.

I received a Commodore VIC-20 in 1983, and learnt to program in BASIC (a programming language). In 1986, I got a Commodore 64 - still the world's best-selling single model of computer. Around that time, I encountered a Macintosh 512K that my mom had at work. I was astounded by how different it was to the Commodore; you didn't have to program it. It had an operating system. It booted up, and was useful without any programming required. It also had a mouse, which was weird. I was used to joysticks and cursor keys. And then there was the black and white screen. My mom argued that colour was irrelevant since there were no colour printers in the business sector. True enough. The screen quality was, however, much higher than the Commodore's. I enjoyed the Mac. I found it very useful for creating documents, something I still do at a frenetic pace.

Around 1988 I joined my school's computer club and found that they had IBM 286s. I found them disappointing. Their screen resolution was lower than the Mac's, and even though they booted up an operating system - MS-DOS - it was virtually useless. You had to explicitly load BASIC, and write your own programs, just like the Commodore. Except, unlike the Commodore, these PCs lacked colour and sound, whereas the Commodore had 16 colours and 3-channel fully synthesised sound. I decided then and there that the PC was vastly inferior, and stopped using it. The next year, a friend of mine showed me Windows 3. I was underwhelmed. It had no direct equivalent of the Mac Finder, which gave you a literal representation of the files on your disk; it was a glorified application launcher. And it was ugly. I shrugged, and thought that it would never be a competitor. Unfortunately, I was wrong. The corporate world bought into it wholeheartedly, and Windows dominated the corporate sector by the early 90s.

But Windows was still rubbish. The PC world relied on "token ring" networks and "Novell NetWare", which were really crummy compared to Apple's server solutions using LocalTalk networks, which had existed since the mid-1980s. Windows was still low resolution, and still ugly. Colour matching to printout was appalling; black would come out dark green, beige would come out russet. The mouse was skippy. It still relied heavily on floppy disks. Filenames were limited to 8 characters. It still had to boot up DOS first. It couldn't multitask; whereas at least on the Mac you could run multiple programs, even if they didn't really cooperate (this is a pun - Apple claimed that it had "cooperative multitasking"). By this time, my brother had acquired a Commodore Amiga. This machine had 4096 colours and multichannel synthesised sound - much better than the Mac at 256-colour single-channel. The PC was out of its league: 16 colours, and still no sound unless you bought the first "Sound Blaster". The Macs were starting to come out with CD-ROMs by default, and all had hard disks by default. Not so with PCs and Amigas, however. But the Amiga had one thing: true multitasking. It could play music while you worked in a word processor. No other machine I had seen could do this.

In the mid-1990s I saw my first SGI machine - a UNIX machine with true multitasking, 65536 colours, proper sound, no floppy drive at all, a massive hard drive, memory protection, and multiple users. I was shocked. I was even more shocked when I discovered that UNIX dated back to 1969. I also discovered the Internet, and found out that it ran on UNIX. I was sold. I started to lose faith in the Mac. I was on a programmer's mailing list at Apple. I said: I want a UNIX that looks like the Mac and works like the Mac. I was kicked off the list for starting a flame war (abusive series of exchanges). The Mac, I argued, crashed (it had no memory protection). Its multitasking was crummy. Colour wasn't great. Access to Internet was OK but difficult. UNIX solved this. Windows, of course, was still rubbish. Access to internet required expert training. Graphics were just getting to 256 colours. Sound wasn't bad on a Sound Blaster. But still no multitasking, memory protection, or multiple users. Then came Windows 95. I was annoyed. It was a blatant copy of the Mac, just inferior and more complex to administer. I ignored it, and it won the market.

Then in 1996-7 a miracle happened. Apple abandoned their attempt at a modern operating system - Copland - which was incompatible with applications from the older system. Instead they bought NeXT, Steve Jobs' other company (he also ran Pixar), for its operating system, NeXTSTEP/OpenStep. With it came Steve Jobs. The company woke up and started to shine. He hired Jonathan Ive. The iMac was released. Then the iPod. Apple was in the news. Then came Mac OS X - NeXTSTEP remodelled to look like the Mac. I went back to that programmer's mailing list, and said "I told you so". They had to eat humble pie. I got what I wanted: a UNIX that worked like the Mac. Proper memory protection, proper multitasking, proper server capabilities, a powerful command line that could do batch jobs easily, proper multiple users, proper internet capabilities, but best of all, the Mac user interface. I was ecstatic. I still am.

A few years later, Windows XP was released. I was annoyed again; the "XP" was an obvious ripoff reference to Mac OS "X" (ten). But I had to give credit where credit was due. XP was based on Windows NT - a system with a journalling filesystem (far harder to corrupt in a crash, in English), and proper multiple users and multitasking. Not perfect, but good. I also noticed that it had fast user switching - you could suspend your login, let someone else use the machine, then go back and carry on later. The Mac didn't have this in early versions of Mac OS X. I felt envy for the first time. That was good, and the journalling filesystem was good. Shortly thereafter, Apple caught up and added those features. They were now officially ahead again, because they had the same or better features, but with greater user-friendliness.

Now we come to Vista and Windows 7. Everyone hated Vista, for reasons I cannot entirely understand - all Windows seems horrid to me. Windows 7, however, seems quite usable. It doesn't quite have all the things I'd want, as a person accustomed to UNIX, but then, I'm a "power user" - someone who pushes computers to the limits of what they can do. Most people wouldn't notice something like that. At the moment, then, the war between Windows and Mac is characterised in terms of feature comparisons. I'm still fairly confident that the Mac has the lead, but the average Joe Soap user won't be able to tell.

There are some good arguments in favour of the Mac, still to this day.
1. It is less targeted by hackers and virus writers. In fact, because of its strict user-access-permissions policies - e.g. that you have to enter an administrator password to install software - it will remain hard to sabotage even if it becomes mainstream. And if you stick to Apple's App Store, you can be pretty sure you're not installing a "Trojan" - a malicious program masquerading as something benign.
2. The Mac is feature-equivalent or superior to a Windows machine.
3. The Mac can run a variety of virtualisation packages, such as Parallels Desktop and Oracle's VirtualBox, which let you run Windows if you need to.
4. Mac hardware is of a higher quality and design specification, but can be upgraded with humble standard PC components.
5. The operating system comes free with the machine.

If you take the cost of a hardware-equivalent PC, and add the cost of antivirus software and Windows, you will find that a Mac and a PC pretty much cost the same - except the Mac is much, much sexier. So I must still advocate the Mac.

The advantages to a PC are:
1. You can choose your own hardware components (good luck getting Windows to understand them all).
2. You have more native software, especially games. (But the Mac can run much of this under virtualisation anyway, though admittedly games run too slowly that way.)

From a usability and features point of view for the average user, the difference is negligible; you can decide on these points above. But I want to make a different argument.

I think that actually the war will not be decided by features, but by _lack of features_. Average Joe is scared of computers, and doesn't want to have to understand them. He knows his file is "In the Microsoft", he doesn't care about what drive it's on, which subfolder, in which directory, etc. He doesn't care. That's the point. And unfortunately for Microsoft, Apple understand this point very well. So with Mac OS X 10.7 Lion, they have begun a transition away from a traditional Windows/Icons/Mouse/Pulldown-menu (WIMP) environment, to the iOS environment seen on their iPads and iPhones. In other words, I think that really, desktop computers are dead. Tablets will win. Not just because they're smaller and more convenient to carry around. They will destroy laptops as well: Because they're human-usable.

In the 1990s, Apple had a set of guidelines for programmers called the Human Interface Guidelines. Your software was expected to be user-friendly and adhere to scientifically researched guidelines of non-confusing computer behaviour. The iPad and iPhone are very, very good at being straightforward and simple. But I bet you didn't know this: an iPhone is, under the hood, a full UNIX machine. So is an iPad. They both run iOS, which is actually a stripped-down Mac OS X. Now, since Apple are evidently on the path to merging Mac OS X and iOS, to my mind the decision is actually this: do you like the iPad or iPhone? If so, then get a Mac, because it works almost the same, and, in the future, _will_ work the same. Whilst a Windows PC will still lag behind on the usability scales, and look more and more like a relic of the 1990s' complex graphical environments, the Mac will become what computers ought to be - a useful household appliance that manages all your data and entertainment.

Friday, 12 August 2011

Does philosophy still have a place in modern society?

I suppose the popular view of philosophy is that it is just "a load of out-of-date beard-stroking with no practical relevance," or that, in a best-case scenario, it's just got something to say about the meaning of life. But it's much more than that.

Let me start by giving a brief history of Philosophy for those who think that it's just a "way of thinking" - as in "our company's philosophy is to give good service".

The term "Philosophy" comes from Greek, meaning love of wisdom. The ancient Greeks, from around 500 BC, until their schools were banned by the Christians, dedicated their time to theorising. They dealt with all manner of topics - mathematics, physics, chemistry, biology, and general issues like politics, ethics, and truth. The style of their philosophy was formal and methodical. Now, once Western civilisation reached the "Enlightenment" period, the various topics that philosophy used to cover branched off as separate sciences. Indeed, even now, the "Chair of Philosophy" in some universities is the professorship of Physics. Physics used to be called "natural philosophy". So historically speaking, philosophy is the parent of Western knowledge.

But is it relevant today? I argue that it is. Take Karl Marx, for example. An economist, you might argue, or a politician. But in reality, his work, Das Kapital, is primarily a work of speculative philosophy. It is only nowadays that we call his work 'sociology'; the term is a modern invention. Now consider Friedrich Nietzsche. Officially a professor of philology, he was really a philosopher. But if we combine these two men, we see the history of the 20th century play out before us: Nietzsche was admired, for the wrong reasons, by Hitler; Marx, by Stalin and Lenin. So the whole course of the 20th century, from World War II all the way to the fall of the Berlin Wall in 1989, was pretty much determined by the writings of two philosophers in the late 19th century.

If you don't believe this point, ask yourself: what was Jesus, if not an itinerant philosopher? His views on the world have determined the history of over half the world's nations for 2000 years. Of course philosophy is influential, and of course it is relevant. We cannot begin to understand or explain any social movement without first referring to the philosophers who penned its foundational beliefs. Consider Jean-Jacques Rousseau and the French Revolution. Consider Machiavelli and the Borgias of Renaissance Italy. Consider the Existentialists and Humanists, and modern socialism's concern for individuals' well-being. Consider Bentham and Mill, and the existence of modern democracy. The world would probably still be run by aristocrats if it weren't for them. Indeed, as the British Government mulls banning hoodies and Facebook in response to the riots, it would do well to read Thomas Hobbes, who was quite in favour of such approaches.

But what about today, in the modern world, now that their job is putatively done? Well, let's think about it. What does modern philosophy offer? Before I begin, let me point out that there are at least four types of philosophy practised in the modern world, of which I am an expert in only one. Specifically, they are: Eastern Philosophy, Continental, Poststructuralist, and Analytic. There was a Humanist or Existentialist branch as well, but it is now largely considered part of the Continental tradition. Eastern Philosophy, broadly speaking, covers the Eastern religions and their take on the world. Continental philosophy is work done by European philosophers who are neither Poststructuralist nor Analytic. Poststructuralists, sometimes called Postmodernists, are a recent variety of philosopher, owing most of their pedigree to French writers like Lacan and Foucault. Their primary concern is not with truth but with power. As such, their critical work is of great importance in sociology, psychology, and politics. I am not an expert in any of these; my area is Analytic philosophy, which is largely an Anglo-German affair.

Analytic philosophy concerns itself with correct argument structure, truth-seeking, valid and sound arguments, formal logic, and similar things. This may sound rather dry, until I explain further. This is just its method. The Analytic method also considers itself mathematically rigorous or scientific. That is, it aims to use only strictly verifiable premises or basic ideas on which to build its arguments; it dips into scientific evidence, and uses strict Boolean or computer-style logic to get its answers. Naturally, this is an idealised characterisation of it, but its style is unmistakeable. If you read a piece that concerns itself primarily with the meaning of a phrase or word, and it goes into many thought experiments, examples of use, counter-examples, and so on, you're looking at an Analytic work.

Why, now, would Analytic philosophy be relevant? Well, because of the particular arguments that it deals with by means of its specific method. Analytic philosophy is divided into four official areas: Epistemology, Metaphysics, Ethics, and Aesthetics.

Epistemology covers what we know; it tries to define good argument structure and what counts as truth. As such, it forms the basis of Boolean logic, which is the foundation of all computers. Furthermore, in its search for truth, Epistemology does not resort to blunt statements like "God just does exist" or "I just have faith". No, it insists on logic and evidence. It is the basis, ultimately, of the scientific method. Most epistemologists in the analytic tradition believe that there is an independent truth, which humans can access. Some, however, known as relativists, do not believe this; they hold, rather, that truth is socially constructed. Epistemologists generally consider Poststructuralists to be a variety of relativist. This is a bit of a religious war, so I will leave it there. The point is: truth matters. Because truth informs our beliefs, and our beliefs determine our actions. If you believe you can get away with a crime, you will likely commit it, for example. Do you know you will get away with it, or do you just believe it? And so on.
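The point about validity being a mechanical, checkable property - the same insight that connects epistemology to Boolean logic and thus to computers - can be illustrated in a few lines of code. This is a sketch of my own devising, not anything from a logic textbook: it brute-force checks that the classic argument form modus ponens ("P implies Q; P; therefore Q") has no counterexample.

```python
from itertools import product

def implies(p, q):
    # "p implies q" is false only when p is true and q is false
    return (not p) or q

def modus_ponens_is_valid():
    # An argument is valid if no truth assignment makes all the
    # premises true while the conclusion is false.
    for p, q in product([True, False], repeat=2):
        premises_hold = implies(p, q) and p
        if premises_hold and not q:
            return False  # counterexample found
    return True

print(modus_ponens_is_valid())  # prints True
```

The same exhaustive-checking idea, scaled up, is how circuit designers and theorem provers verify logical claims today.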

But now we are entering the field of Ethics. Ethics deals with what is good to do or not to do. There are many positions within ethics; let me enumerate just a few. Relativists claim that the good is socially constructed. So, for example, they would argue that a Burqa ought not to be banned, because it is good for Muslims. On the same reasoning, however, they might be forced to defend Hitler. Absolutists, on the other hand, insist that doctrines like the Ten Commandments are the real good. Then you get pragmatists, such as William James, who argue that whatever makes sense to do is good. Then there are consequentialists, such as J. S. Mill, who argue that we ought to do whatever has the best consequences for the most people. This view, incidentally, is what really gave impetus to modern democracy. So it is not enough to say "The Bible says so". The Bible says you must stone witches to death, as well as your son, if he is disobedient. But we no longer live by those Biblical morals. Ethics provides us with the possibility of a better, more modern path to truly moral behaviour. It is of paramount importance in guiding us. Believe it or not, most modern ethical notions, such as "the right to privacy", originate in old considerations by philosophers. What about software piracy? Is that ethical? Music piracy? So what if politicians are corrupt? What about abortion? HIV/AIDS confidentiality? Ethics addresses these issues.

Now what about Metaphysics? This deals with what exists. So, it deals with issues like what mathematics really is, what fundamental particles could really exist, and, importantly, whether God exists. In the current climate of religious violence, metaphysics has an incredibly important job in considering such a matter. Not that the vigorously devout will heed any answer from a philosopher - indeed, Colossians 2:8 says: "Beware lest any man spoil you through philosophy". But philosophy has much to say about God, and much more clearly than any religious text. Then there are the scientists at the LHC, who are trying to find all these various particles. They would do well to chat to a philosopher. They may find that they're wasting their time in a massive quest to understand something that can be answered more simply. Or they may not. They may find that the philosopher can clarify their theoretical constructs for them. What is space-time, exactly? What is a superstring, exactly? Does it exist? And if it exists, can we use it? Could there really be multiple universes? Philosophy addresses these questions.

Lastly, Aesthetics. This is of enormous sociological significance. Is pornography, for example, beautiful? Is heavy metal music or gangster rap beautiful? To whom? Why? Can any such things be justified? Is beauty absolute for all people, or is it relative to individuals? Is it relative to societies? Is a woman in a Burqa beautiful, or a fanatic? Is a woman in a bikini beautiful or a prostitute? Are Jackson Pollock's paintings - consisting of splatters of paint and nothing more - beautiful? Should you really pay millions for one? What about Picasso? Or Dali? Is Le Corbusier's crude concrete architectural style beautiful or an eyesore? Should city planners allow it? What about piercings or tattoos? Should they be permitted or are they ugly? Are anorexic fashion models beautiful or hideous? Should the body be exposed, concealed, reviled or worshipped? Is popular music rubbish, is classical music the only true music? Or is it dusty and irrelevant?

Philosophy has much to offer. We ignore it at our peril. It shapes our societies without us realising it. Studying it is like opening your eyes after being blind all your life.

Thursday, 11 August 2011

The ET theory of the origin of life irritates me.

I find this ET theory of life completely pointless - the idea that DNA or microbes arrived on this planet aboard an asteroid and then evolved upon landing, a bit like the movie "Evolution". Even if it's true, this theory does not answer any questions.

The fact of the matter is that at some stage you have to stop and say that life formed SOMEWHERE, and you have to say how. Explaining life on this planet as coming from another one just pushes the mystery back a step. It's as thick as saying "god did it", because that doesn't explain where 'god' came from. So it is with life: saying 'it came from another world' doesn't explain how or why it formed on that other world.

So just get it right! Explain how it could come about on this world, and the job is done!

Thursday, 4 August 2011

What do the stars hold for you?

Many people religiously consult the astrology section of their favourite newspaper, magazine or website, eagerly anticipating the good news that the stars hold for them. But do these 'predictions' amount to anything serious, or are they just a form of harmless entertainment?

Let's start with the first of the customary accusations levelled against astrology - that it's vague. If you consult your "reading" for today and substitute the "you" in the reading with, say, your mum's name, or your best friend's, you will probably find that the "reading" is largely accurate for them, too. Dawkins did an informal experiment with a small sample of people - he took a reading and told people it was for their star sign, when it was in fact for another sign - and then asked them what they thought. Most of the people found it to be fairly accurate, except one person: the person whose sign it actually was. Surely, if readings were accurate, a reading would only ring true for the person whose sign it was?

But does astrology even pretend to be a form of prediction? Well, unfortunately, yes. The system was originally based on the observation that recurring human events seemed to correlate with observed celestial events, and so when those celestial events recurred, the human events were expected to recur too. But astrologers argue that there is more to the 'predictions' they make. A true astrologer argues that these 'predictions' are actually indications of possibilities or potentials, likelihoods or probabilities. Astrology is more of a mapping system, which correlates stars and planets to personality types, tendencies, upbringing, how you will probably behave in a relationship, your potential for earning money, and so on. One can look at it in the same way one would look at a psychological profile. So persons with specific celestial mappings, which can be quite distinctive, could be said to have certain patterns recur in their lives. Astrology is not, moreover, just a simple matter of the star signs determining the personality. There are other factors that have to be taken into account, such as the planets. A more accurate, true prediction or characterisation could only be drawn from consulting a Birth Chart. This alone, then, could give an indication of the kind of life you could expect, or the events that are likely to happen. Astrology is not, therefore, predictive in the sense that it tries to give precise descriptions of forthcoming events. Rather, it just gives tendencies of your personality, and thus the kinds of things that are likely to happen to you. The predictions one sees in the newspapers are certainly not meant to be accurate, a true astrologer will argue, because they do not take all the factors into account, such as the relevant planets, your family, and so on. They are very broad at best; they are more for entertainment purposes.

But how could astrology be accurate at all? What about a case of two people with the same star sign, who have radically different fates and personalities? My stepfathers shared a birthday, but you could not imagine two people with such different fates and personalities. How is that possible in the light of Astrology's claims? Well, the astrologer answers, this case would be one in which the ascendant planets were very relevant, and explained the difference. Suppose we accept that reply. But then what about the case of twins? Twins do not often share the same fate or personality. Yet they should always have identical fates, if astrology were true.

Suppose, now, that astrology admits that it has some predictive tendencies, given that it lavishly tells you in the newspapers what is going to transpire on any particular day. How accurate are these 'predictions'? In the scientific arena, we consider a prediction accurate only if it gives precise details. Scientists discard any theory that does not predict accurately. Remember, when you step onto a plane, that you're putting yourself in the hands of Bernoulli's principle. It predicts very accurately, statistically speaking. How accurate, by comparison, are astrology's predictions? When they say that you are going to "have difficulties with money today," why do they not say "you will lose exactly £10 out of your wallet at this exact address..."? They ought to, if astrology were remotely a science.

Now, let's look at the method of astrology. Astrology is not, contrary to what most people assume, merely a question of which constellation was in the sky at the time of your birth. It also involves the relative positioning of the planets and the Moon, as well as the stars and Sun, as we've mentioned. The time of birth has a big impact, not just the day of birth, and there are planetary alignments to consider as well. As such, astrology is a sophisticated system. But the important question is this: do astrologers use telescopes? If not, they cannot possibly obtain an accurate reading - because if they did use telescopes, they'd notice that the constellations they expect to see are not actually the dominant ones at that point in time. Since Ptolemy first devised our current system, the precession of the Earth's axis has shifted the stars by about 23 degrees - nearly an entire star sign (each sign spans 30 degrees of sky)! If astrologers bothered to use telescopes, they'd have noticed this. But astrologers use charts, not telescopes. Usually the Earth is central on the chart, and the Sun is a mere planet that orbits the Earth. We last gave this geocentric model of the cosmos credence hundreds of years ago; we now know it to be false.
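As a back-of-envelope check of that drift, multiply the precession rate by the elapsed time. The rate (about 50.3 arcseconds per year) and Ptolemy's date (roughly AD 150) are my assumptions, not figures from this post; the total you get depends on which zero point you choose, which is why quoted values for the drift range from about 23 to 26 degrees.

```python
# Rough estimate of how far the equinox has precessed since Ptolemy.
# Assumptions: precession rate ~50.3 arcsec/year; Almagest written ~AD 150.
ARCSEC_PER_YEAR = 50.3
years_elapsed = 2011 - 150

drift_degrees = years_elapsed * ARCSEC_PER_YEAR / 3600  # arcsec -> degrees
signs_drifted = drift_degrees / 30  # each zodiac sign spans 30 degrees

print(f"Drift: {drift_degrees:.1f} degrees, about {signs_drifted:.2f} of a sign")
```

Whichever epoch you pick, the answer is the better part of a whole sign: a chart-reading "Scorpio" was actually born under Libra's stars.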

Third, let's think about the mechanism by which the stars could influence us. Traditionally, no specific energy or causal mechanism is stipulated as the reason for the correlations between star signs and personality or fate; they are merely observed correlations lacking an explanation. So how do the stars and planets influence us? Could it be by means of light? Well, that won't work, because the planets are so dim that the light reaching us from them has less influence on us than the glow of an LED on your computer. That's right: if you're pregnant, and light is the way the stars influence us, then you're messing with your unborn baby's future by sitting near any artificial light source.

If, however, it's not light, then maybe it's gravity. Well, anyone who's done Physics will recognise the equation F = G·M₁M₂/r². This equation gives the force of gravity between two objects. Let's take an example: the influence of Jupiter. I don't want to prejudice this by using, say, a star, because stars are much further away, and their influence is therefore smaller still. The mass of Jupiter is 1.8986×10²⁷ kg - roughly two billion billion billion kilograms. Let's say a newborn's mass is 3 kg (and G is a very small constant, about 6.674×10⁻¹¹ in SI units). The distance between the baby and Jupiter is between 893 billion and 964 billion metres, depending on the positions of the two planets in their orbits. If we do the calculation, it gives us a force of roughly 0.0000004 newtons (about 4×10⁻⁷ N). For distant stars it's much worse, since they're thousands to millions of times further away. Now, just so that you understand how weak this force is: the weight of a 1 lb mass at the Earth's surface is about 4.45 N. The force Jupiter exerts on a newborn here on Earth, therefore, is about ten million times less than that of a 1 lb weight. No chance that that could mess with your fate; the gravitational field of your mother probably has more influence.
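The arithmetic above is easy to reproduce. A minimal sketch, using standard textbook values for G and Jupiter's mass and the post's own maximum Earth-Jupiter distance:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_JUPITER = 1.8986e27  # mass of Jupiter, kg
m_baby = 3.0           # mass of a newborn, kg
r = 964e9              # Earth-Jupiter distance at its largest, metres

force = G * M_JUPITER * m_baby / r**2
print(f"Jupiter's pull on the baby: {force:.2e} N")  # roughly 4.1e-07 N

# Compare with the weight of a 1 lb (0.4536 kg) mass at the Earth's surface:
weight_1lb = 0.4536 * 9.81
print(f"Weaker than a 1 lb weight by a factor of {weight_1lb / force:.1e}")
```

Swap in the minimum distance (893 billion metres) and the force rises only to about 4.8×10⁻⁷ N; either way it is some ten million times weaker than a bag of flour resting on your hand.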

But perhaps the stars exert their influence on us by means of a yet-unknown force. Perhaps there's a mysterious force in the universe - let's call it the Force - that influences us. And let's say that the Force is much stronger than gravity, and can therefore reach us from these stars and planets and influence us. Surely, if the Force is strong enough to reach us from planets that are billions of miles away, and strong enough to exert a fatalistic or deterministic influence on our minds and bodies, it could be detected and measured? Surely, by now, we'd have noticed strange things happening, and developed a way to measure it? We can measure very small things indeed - the force of gravity, the fields of particles, and so on. So can we not suppose that something as powerful as this Force would set off alarms in laboratories the world over? And surely the Earth itself would also carry this Force, and completely overrule the other planets purely on the basis of its proximity? You can't debate this one: every force we know of diminishes in strength with distance, as the formula above illustrates. This means that the Earth must be millions of times more dominant or ascendant in everyone's Birth Chart.

It has been found that there are seasonal effects on personality (google this), but remember that unlike the stars, seasons vary between hemispheres; so even if astrological predictions about personality worked for the northern hemisphere, they’d be completely opposite for the southern.

Lastly, why should the stars at the _time of birth_ be relevant? Does it not make more sense to suppose that the measurement should be taken from the time of _conception_? Is this not the key reason why we talk of someone being a Scorpio or an Aries - because they were influenced by those constellations _at the time of their birth_? But that doesn't make sense. Think about it. A person who has already existed for nine months cannot suddenly become susceptible to stellar influence just because he or she happens to emerge from a warm, damp container at that moment - otherwise anyone who gets out of a heated swimming pool is at risk of having their destiny seriously messed with by the prevailing constellations. If astrology were true, the Force would influence us from conception, not birth. But an advocate of astrology may have an answer here. Perhaps the chart is dated from the time of birth because this is the point at which forces and events start to come into play in your life, because you are no longer in the safety of the womb. This answer would be a good one if it weren't well known that babies are influenced in utero by what the mother does, by environmental sounds, and so on. Moreover, how would some distant stars and planets just happen to "know" when you emerged into the world, and therefore that they must now start influencing you? Surely they're emanating their Force all the time, regardless of whether you've been born or not? Their influence must start at conception, not birth. The sign at your birth is irrelevant.

I must conclude that astrology is nonsense. But why should I spoil people's fun? For a number of reasons. Firstly, there's the self-fulfilling-prophecy problem. It is possible that people consulting an astrological reading might subconsciously _act it out_. Someone might read, for example, that they're going to get very bad news that day, and go about the whole day unconsciously doing stupid things because they're so stressed about what the 'bad thing' might turn out to be. Secondly, astrology is part of a superstitious world-view, one that doesn't connect observed facts to theories by an explanatory causal mechanism. Astrology offers no causal link or explanation at all for why "Scorpios" are "belligerent" or "Taureans" are "stubborn". This world-view can cause harm. Think of how astrology encourages stereotyping and unfair treatment - especially when it comes to dating ("Oh, I only date Sagittarians; I'm incompatible with Leos"). Imagine if a newspaper wrote articles generalising about a race or nation of people: that paper would be sued for racism. So why is it OK to typecast and stereotype people on the basis of a completely unscientific, unexplained system like astrology? Some people defend astrology as a kind of predecessor of Psychology, as a kind of theory of personality. But Psychology bases its theories on the observed behaviour of persons. It does not _prescribe_ behaviour _to_ persons on the basis of their birthdate. ("Oh, you're a Taurus, so you're stubborn" - no free choice in the matter at all.)

I think it's written in the stars that astrology's days are numbered.