ChatGPT struggles with prime numbers!
While trying to understand how public-private key cryptography works using ChatGPT, I asked what a coprime number is after ChatGPT explained its use in that architecture. Here is the response in all its glory:
Two numbers are said to be coprime (or relatively prime) if they have no common prime factors other than 1. In other words, their greatest common divisor (GCD) is 1.
For example:
- 8 and 15 are coprime because their only common factor is 1.
- 14 and 15 are coprime because they share no common factors other than 1.
- 10 and 21 are not coprime because they both have 1 as a common factor.
Looking at the first example and then the last one, which is supposed to be a counter-example of what is not a coprime pair, we can clearly see that ChatGPT gives the same reason in each case, i.e. that the numbers only have 1 as a common factor. After further probing, ChatGPT replies:
So he changed his answer to say that 10 and 21 are coprime, but then I asked if 'it' knows whether 3 and 6 are coprime. Here is the response once again:
It is contradicting itself: it initially says 3 and 6 are not coprime, it knows that the greatest common divisor has to be 1 for coprime numbers, and it lists the factors of the numbers correctly, yet it then suggests that 3 and 6 are coprime simply because they have 1 as a common factor.
As I wrote this blog, I initially referred to ChatGPT as 'he', as many people probably do, but after seeing the contradictory statements, it is clearly nothing but a machine making the same mistake over and over. 3 and 6 are not coprime because they share the factor 3, which is their greatest common divisor.
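If you want to check the pairs yourself rather than trust either of us, here is a minimal Python sketch using only the standard library's math.gcd; it simply applies the GCD = 1 definition to the examples discussed above:

```python
from math import gcd

# Two numbers are coprime exactly when their greatest common divisor is 1.
pairs = [(8, 15), (14, 15), (10, 21), (3, 6)]

for a, b in pairs:
    g = gcd(a, b)
    verdict = "coprime" if g == 1 else "NOT coprime"
    print(f"gcd({a}, {b}) = {g} -> {verdict}")
```

Running it shows that 8 and 15, 14 and 15, and 10 and 21 are all coprime, while gcd(3, 6) = 3, so 3 and 6 are not.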
Rest assured, if your job or any task revolves around doing the same thing over and over, then it can and will be replaced by a machine. Something a machine does very well is repetition; whether the instruction is right or wrong is up to the human instructing the machine.
Even today, AI is unable to think. The only difference is that a human being is no longer involved in writing explicit instructions for a machine but in providing implicit ones. In the old days, a programmer defined the input, the instructions, and the output. That has not changed for the computer, which still needs all three; but instead of instructions, the programmer now provides example inputs and outputs directly to the computer, along with instructions to map the input to the output using a very large number of parameters. Further instructions tell the computer to tweak those parameters, which start as random numbers, until they eventually produce outputs matching the example values, if such numbers can be found at all.
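To make that concrete, here is a minimal sketch of the idea in Python. The rule y = 2x + 1, the example data, and the learning rate are all invented for illustration; the point is that the rule itself is never written down as an instruction, only example pairs and a recipe for tweaking two randomly initialised parameters:

```python
import random

# Hypothetical example pairs: the machine never sees the rule y = 2x + 1,
# only these input-output examples.
examples = [(x, 2 * x + 1) for x in range(10)]

w, b = random.random(), random.random()  # parameters start as random numbers
lr = 0.01                                # step size for each tweak

for _ in range(5000):
    for x, y in examples:
        pred = w * x + b   # map input to output using the parameters
        err = pred - y     # how far off the example output we are
        w -= lr * err * x  # tweak the parameters to shrink the error
        b -= lr * err

print(f"learned w={w:.3f}, b={b:.3f}  (target was w=2, b=1)")
```

After enough tweaks the parameters land near w = 2 and b = 1, matching the example values, yet at no point did anyone instruct the computer what the rule was.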
The above is only possible due to the explosive growth of memory devices: 512 MB of RAM on a Pentium 4 was very powerful in 2002, while now you have 128 GB+ of RAM on servers holding millions of parameters for thousands of input-output example pairs. In the past, you needed a human being to choose the parameters, tweak them, decide which were important and which were not, and squeeze them into tiny storage media.
AI can easily do something like doublethink, and this is the eerie beauty of George Orwell's novel 1984: we are getting closer and closer to that dystopia.