Q: Is digital intelligence better than biological intelligence?
A: A digital computer is designed so you can tell it exactly what to do, and it will do exactly what you tell it. Even when it's learning, two different digital computers can do exactly the same thing with the same learned knowledge. That means you could make ten thousand copies of the same knowledge and have them all running on different computers, and whenever one copy learns something, it can communicate it very efficiently to all the other copies. So you can have ten thousand digital agents out there, like a hive mind, and they can share knowledge extremely efficiently just by sharing connection strengths inside the neural nets. We can't do that. If you learn something and you want to tell me about it, you have to use sentences or pictures, and you can only share a very limited amount of information that way. So it's much, much slower for you to communicate what you have learned to me than for these digital intelligences to communicate with each other. That makes them much better: they can learn a whole lot of stuff between them.
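The sharing Hinton describes can be sketched in a few lines: each copy learns locally, then all copies average their connection strengths so every agent benefits from what any one of them learned. This is a minimal illustrative sketch, not Hinton's actual method; the learning rule, the 0.1 step size, and the function names are all hypothetical, and three copies stand in for ten thousand.

```python
# Hypothetical sketch: identical digital agents pooling knowledge by
# averaging their "connection strengths" (weights) after local learning.

def local_update(weights, data):
    # Toy "learning": nudge each weight toward the mean of the local data.
    target = sum(data) / len(data)
    return [w + 0.1 * (target - w) for w in weights]

def share_knowledge(copies):
    # Average the weights across all copies, so every agent ends up with
    # the pooled knowledge of the whole group.
    n = len(copies)
    avg = [sum(ws) / n for ws in zip(*copies)]
    return [list(avg) for _ in range(n)]

# Three copies for brevity; each sees different data.
copies = [[0.0, 0.0] for _ in range(3)]
datasets = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]

copies = [local_update(w, d) for w, d in zip(copies, datasets)]
copies = share_knowledge(copies)
```

After one round, all three copies hold identical weights that reflect data none of them saw individually; a human analogue would require each learner to describe its experience in sentences instead.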
―
The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI
I have made Prof. Hinton's public talks my guru. This is the only understanding I have, though I keep reading and learning about artificial intelligence. Reading and listening is my way of learning. I wanted to add programming to the mix, but that got aborted; I need a better time. I keep asking ChatGPT questions and am generally impressed with the responses. Here is my conversation with ChatGPT today:
ChatGPT: Good morning! I can't provide real-time information, but if the 2024 Indian elections have occurred, you might want to check reliable news sources for the latest updates and analysis on the election results. Is there anything else you're curious about regarding elections or any other topic?

You:

ChatGPT: Being a generalist in your career can have both advantages and disadvantages, depending on various factors such as your personal preferences, industry trends, and job market demands. Here are some points to consider:

You:

ChatGPT: Professions that value versatility, adaptability, and a broad skill set are often well-suited for generalists. Here are some examples of professions where being a generalist can be advantageous:

You:

ChatGPT: You're welcome! If you have any more questions or need further assistance, feel free to ask. Have a great day!
The latest wave of AI relies heavily on machine learning, in which software identifies patterns in data on its own, without being given any predetermined rules as to how to organize or classify the information. These patterns can be inscrutable to humans. The most advanced machine-learning systems use neural networks: software inspired by the architecture of the brain. They simulate layers of neurons, which transform information as it passes from layer to layer. As in human brains, these networks strengthen and weaken neural connections as they learn, but it's hard to see why certain connections are affected. As a result, researchers often talk about AI as 'black boxes', the inner workings of which are a mystery.

In the face of this difficulty, researchers have turned to the field of explainable AI (XAI), expanding its inventory of tricks and tools to help reverse-engineer AI systems. Standard methods include, for example, highlighting the parts of an image that led an algorithm to label it as a cat, or getting software to build a simple 'decision tree' that approximates an AI's behaviour. This helps to show why, for instance, the AI recommended that a prisoner be paroled or came up with a particular medical diagnosis. These efforts to peer inside the black box have met with some success, but XAI is still very much a work in progress.

The problem is especially acute for large language models (LLMs), the machine-learning programs that power chatbots such as ChatGPT. These AIs have proved to be particularly inexplicable, in part because of their size. LLMs can have hundreds of billions of 'parameters', the variables that the AI uses internally to make decisions. XAI has "rapidly grown in the past few years, especially since LLMs have started to emerge", says Mor Geva, a computer scientist at Tel Aviv University in Israel. These inscrutable models are now taking on important tasks.
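One XAI idea the article mentions, approximating a black box with a simple, readable model, can be sketched in miniature: probe the black box with many inputs, then fit the simplest possible "decision tree" (a single threshold) that best mimics its answers. Everything here is a hypothetical toy, not code from any real XAI library; `black_box` stands in for an inscrutable model.

```python
# Hypothetical sketch: fitting a one-rule surrogate ("predict 1 if x > t")
# to mimic a black-box classifier, a miniature of the decision-tree
# approximation idea used in explainable AI (XAI).

def black_box(x):
    # Stand-in for an inscrutable model (say, "high risk" above some score).
    return 1 if 3 * x + x * x > 10 else 0

def fit_stump(xs, labels):
    # Try each probe input as a threshold; keep the one whose simple rule
    # agrees most often with the black box.
    best = None
    for t in xs:
        preds = [1 if x > t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(xs)
        if best is None or acc > best[1]:
            best = (t, acc)
    return best  # (threshold, fraction of agreement with the black box)

xs = [i / 10 for i in range(41)]       # probe inputs 0.0 .. 4.0
labels = [black_box(x) for x in xs]    # query the black box
threshold, agreement = fit_stump(xs, labels)
```

The surrogate gives a human-readable explanation ("the model says 1 whenever the input exceeds the threshold") even though it never looks inside the black box; real XAI tools apply the same idea with far richer surrogates and far larger models.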
People are using LLMs to seek medical advice, write computer code, summarize the news, draft academic papers and much more. Yet it is well known that such models can generate misinformation, perpetuate social stereotypes and leak private information.

For these reasons, XAI tools are being devised to explain the workings of LLMs. Researchers want explanations so that they can create safer, more efficient and more accurate AI. Users want explanations so that they know when to trust a chatbot's output. And regulators want explanations so that they know what AI guard rails to put in place. Martin Wattenberg, a computer scientist at Harvard University in Cambridge, Massachusetts, says that understanding the behaviour of LLMs could even help us to grasp what goes on inside our own heads.

― Nature, How does ChatGPT 'think'? Psychology and neuroscience crack open AI large language models, by Matthew Hutson