Humans use symbols to record, process, and transmit information. This unit introduces binary digits as the symbols computers use to perform these tasks, with a focus on the representation of text and numbers.
Bit: the smallest unit of data in computing. It is represented by a 0 or a 1.
Binary: a number system that uses only two digits, 0 and 1. The binary system is known as a 'base 2' system.
Byte: the basic unit of information in computer storage and processing. A byte consists of 8 adjacent binary digits (bits), each of which is a 0 or a 1.
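As a minimal illustration of the bit/byte/binary ideas above, Python's built-in number formatting can convert between decimal and binary (this sketch is not part of the unit materials):

```python
# A sketch of binary/decimal conversion using Python's built-in helpers.
# format(n, "08b") renders an integer as eight binary digits; int(s, 2)
# parses a binary string back into a decimal integer.

value = 203                     # a decimal number that fits in one byte
bits = format(value, "08b")     # eight bits: one byte
print(bits)                     # "11001011"
print(int(bits, 2))             # back to decimal: 203

# A byte holds 8 bits, so it can represent 2**8 = 256 distinct values.
print(2 ** 8)                   # 256
```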
Character set: associates each character with a sequence of symbols.
Decimal is a term that describes the base-10 number system. The decimal number system consists of ten single-digit numbers: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
Decoding: the conversion of an encoded format back into the original sequence of characters.
Encoding: the process of putting a sequence of characters into a specialised format for efficient transmission or storage.
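The encoding/decoding round trip can be sketched with Python's built-in UTF-8 codec, which maps each character to a numeric code and back (an illustrative sketch, not part of the unit materials):

```python
# A sketch of encoding and decoding text with Python's built-in UTF-8 codec.
# Encoding turns characters into bytes; decoding reverses the process.

message = "Hi!"
encoded = message.encode("utf-8")       # characters -> bytes
print(list(encoded))                    # [72, 105, 33]  (the character codes)
decoded = encoded.decode("utf-8")       # bytes -> characters
print(decoded)                          # "Hi!"

# ord() and chr() expose the same character/number mapping one symbol at a time.
print(ord("A"), chr(66))                # 65 B
```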
The number base specifies how many digits are used (including zero).
Representation: refers to the form in which data is stored, processed, and transmitted.
The number of symbols that a representation contains.
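The effect of the number base can be sketched by interpreting the same digit string in different bases, using Python's built-in `int(text, base)` (a sketch for illustration only):

```python
# A sketch of how the base fixes the digit set: base n uses digits 0 to n-1,
# and the same digit string stands for a different value in each base.

print(int("101", 2))    # base 2 (digits 0-1):   1*4 + 0*2 + 1*1 = 5
print(int("101", 10))   # base 10 (digits 0-9):  101
print(int("101", 16))   # base 16 (digits 0-f):  1*256 + 0*16 + 1 = 257
```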
This unit takes learners on a tour through the different layers of computing systems: from programs and the operating system, to the physical components that store and execute these programs, to the fundamental binary building blocks that these components consist of. Learners will attempt to define the term ‘artificial intelligence’ and explore the kinds of problems that it has traditionally dealt with. They will also focus on machine learning, investigate its relationship with conventional programming, and consider the ethical questions involved in building any system that makes decisions.
Input: data which is inserted into a system.
Output: data which is sent out of a system.
Storage: the part of a computer that stores data.
Processor: contains the electronic circuitry that manipulates input data into the information people want.
Computational thinking: using a set of techniques and approaches to help solve complex problems so that they can be understood by a human or a machine.
Computer: an electronic device that takes input, processes data, and outputs a result according to the instructions of a variable program. It may also communicate and store processed data.
Data: units of information (integers, characters, and Booleans).
Operating system: the software that manages the hardware and software resources in a computer system.
Programs: sequences of instructions for a computer.
Artificial intelligence (AI): any system that performs tasks that typically require intelligence in humans.
Machine learning: programming computers to learn from experience.
Software: any parts of a computer system that aren't physical (programs, applications, data).
Hardware: the physical parts of a computer system, e.g. a graphics card or hard disk drive.
Logic gates: circuit components that take one or more inputs, compare the inputs with each other, and provide a single output based on logical functions such as AND, OR, and NOT.
Boolean: a data type in computing that has only two possible values, true or false.
Truth table: used to assess the possible results of a Boolean algebra statement.
AND gate: both input A AND input B have to be 1 (or ON) in order for the output to be 1.
OR gate: either input A OR input B has to be 1 (or ON) in order for the output to be 1.
NOT gate: the output (1 for ON, 0 for OFF) is the opposite of the input value.
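The three gates above can be modelled as small Python functions, with a printed truth table for the two-input gates (a modelling sketch, not part of the unit materials):

```python
# A sketch of AND, OR, and NOT gates as Python functions, plus a truth
# table showing every input combination for the two-input gates.

def AND(a, b):
    return 1 if a == 1 and b == 1 else 0   # output 1 only when both inputs are 1

def OR(a, b):
    return 1 if a == 1 or b == 1 else 0    # output 1 when either input is 1

def NOT(a):
    return 0 if a == 1 else 1              # output is the opposite of the input

print("A B | AND OR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), " ", OR(a, b))

print("NOT 0 =", NOT(0), " NOT 1 =", NOT(1))
```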
This unit introduces learners to text-based programming with Python. The lessons form a journey that starts with simple programs involving input and output, and gradually moves on through arithmetic operations, randomness, selection, and iteration.
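A short program touching each concept named in the unit description might look like the sketch below. The `input()` call is replaced by a fixed value so the sketch runs without a user at the keyboard; the random seed is an assumption added so the roll is repeatable.

```python
# A sketch covering output, arithmetic, randomness, selection, and
# iteration. (name = "Ada" stands in for: name = input("Name? "))

import random

name = "Ada"
print("Hello,", name)

total = 3 + 4 * 2                   # arithmetic: * before +, so total is 11
print("Total:", total)

random.seed(7)                      # seeded so the 'random' roll is repeatable
roll = random.randint(1, 6)         # a value from 1 to 6 inclusive

if roll > 3:                        # selection: choose between two branches
    print("High roll:", roll)
else:
    print("Low roll:", roll)

for i in range(3):                  # iteration: repeat a block three times
    print("Loop pass", i)
```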
Algorithm: a set of step-by-step instructions to solve a problem.
Program: a sequence of instructions to perform tasks with a computer.
Syntax: the rules governing how to write statements in a programming language.
Syntax error: an error caused by not following the syntax rules of the programming language. These are normally spelling errors or small grammatical mistakes.
Logic error: a fault in the logic or structure of the program.
Programming language: a language used by a programmer to write a piece of software.
Python: a high-level programming language.
Data type: the format in which a variable or constant holds data, such as ‘integer’ or ‘string’.
Bug: an error in a program.
Debugging: the process of finding and correcting programming errors.
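Debugging a logic error can be sketched like this: the faulty version runs without crashing but gives the wrong answer, and the fix is a small change (an illustrative example, not from the unit materials):

```python
# A sketch of fixing a logic error. The buggy version used the wrong
# grouping, so it ran without crashing but computed the wrong value:
#
#     def average(a, b):
#         return a + b / 2      # bug: / binds tighter than +, so only b is halved
#
# The corrected version adds the missing brackets:

def average(a, b):
    return (a + b) / 2          # fixed: add first, then divide

print(average(4, 8))            # 6.0, as expected (the buggy version gave 8.0)
```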
Commenting: adding one or more sentences to explain the purpose of a section of code.
Execute: to run a computer program.
String: a sequence of characters, often stored as a variable in a computer program. These characters can include numbers, letters, and symbols.
Boolean: a data type in computing that has only two possible values, true or false.
Variable: a memory location within a computer program where values are stored.
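The ideas of variable, string, and Boolean come together in a few lines of Python (a sketch with made-up example values):

```python
# A sketch of variables holding different data types: a string, an
# integer, and a Boolean produced by a comparison.

name = "Rover"            # string: a sequence of characters
age = 4                   # integer: a whole number
is_happy = age < 10       # Boolean: the comparison evaluates to True or False

print(name, "is", age)
print("Happy?", is_happy)           # True
print(type(name).__name__,          # str
      type(age).__name__,           # int
      type(is_happy).__name__)      # bool
```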