What is a program that seems to perform one function while actually doing something else?

A program that appears to perform one function while secretly doing another is known as a Trojan horse, one of the most common types of malware.

This page provides an overview of the most common malware applications. For specific steps you can take to protect against malware, see our Protect Against Viruses & Security Threats pages.

What is Malware?

Malware is a catch-all term for various malicious software, including viruses, adware, spyware, browser hijacking software, and fake security software.

Once installed on your computer, these programs can seriously affect your privacy and your computer's security. For example, malware is known for relaying personal information to advertisers and other third parties without user consent. Some programs are also known for containing worms and viruses that cause a great deal of computer damage.

Types of Malware

  • Viruses are the most commonly known form of malware and potentially the most destructive. They can do anything from erasing the data on your computer to hijacking it to attack other systems, send spam, or host and share illegal content.
  • Spyware collects your personal information and passes it on to interested third parties without your knowledge or consent. Spyware is also known for installing Trojan viruses.
  • Adware displays pop-up advertisements when you are online.
  • Fake security software poses as legitimate software to trick you into opening your system to further infection, providing personal information, or paying for unnecessary or even damaging "clean ups".
  • Browser hijacking software changes your browser settings (such as your home page and toolbars), displays pop-up ads and creates new desktop shortcuts. It can also relay your personal preferences to interested third parties.

Facts about Malware

Malware is often bundled with other software and may be installed without your knowledge.
For instance, AOL Instant Messenger comes with WildTangent, a documented malware program. Some peer-to-peer (P2P) applications, such as KaZaA, Gnutella, and LimeWire, also bundle spyware and adware. While End User License Agreements (EULAs) usually include information about additional programs, some malware is automatically installed, without notification or user consent.

Malware is very difficult to remove.
Malware programs can seldom be uninstalled by conventional means. In addition, they ‘hide’ in unexpected places on your computer (e.g., hidden folders or system files), making their removal complicated and time-consuming. In some cases, you may have to reinstall your operating system to get rid of the infection completely.

Malware threatens your privacy.
Malware programs are known for gathering personal information and relaying it to advertisers and other third parties. The information most typically collected includes your browsing and shopping habits, your computer's IP address, and your identification information.

Malware threatens your computer’s security.
Some types of malware contain files commonly identified as Trojan viruses. Others leave your computer vulnerable to viruses. Regardless of type, malware is notorious for being, directly or indirectly, at the root of virus infection, for causing conflicts with legitimate software, and for compromising the security of any operating system, Windows or Macintosh.

How do I know if I have Malware on my computer?

Common symptoms include:

Browser crashes & instabilities

  • Browser closes unexpectedly or stops responding.
  • The home page changes to a different website and cannot be reset.
  • New toolbars are added to the browser.
  • Clicking a link does not work or you are redirected to an unrelated website.

Poor system performance

  • Internet connection stops unexpectedly.
  • Computer stops responding or takes longer to start.
  • Applications do not open or are blocked from downloading updates (especially security programs).
  • New icons are added to desktop or suspicious programs are installed.
  • Certain system settings or configuration options become unavailable.

Advertising

  • Ads pop up even when the browser is not open.
  • Browser opens automatically to display ads.
  • New pages open in browser to display ads.
  • Search results pages display only ads.

A computer program is a sequence or set of instructions in a programming language for a computer to execute. Computer programs are one component of software, which also includes documentation and other intangible components.[1]

A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter.[2]

If the executable is requested for execution, then the operating system loads it into memory and starts a process.[3] The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction.[4]

If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement.[2] Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer.
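For illustration, here is a minimal "Hello, World!" program in C. The compile-and-run commands in the comment assume a Unix-like system with a C compiler installed; this is a sketch of the two execution paths described above, not a prescription for any particular platform.

/* hello.c -- source code in its human-readable form.
   Assuming a Unix-like system (an assumption, not from the text),
   it might be translated and executed with:
       cc hello.c -o hello    -- the compiler produces an executable
       ./hello                -- the OS loads the executable and starts a process
*/
#include <stdio.h>

int main(void)
{
    printf("Hello, World!\n");   /* print the greeting */
    return 0;                    /* report success to the operating system */
}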

Example computer program

The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the interpreted language Basic [1964] was intentionally limited to make the language easy to learn.[5] For example, variables are not declared before being used.[6] Also, variables are automatically initialized to zero.[6] Here is an example computer program, in Basic, to average a list of numbers:[7]

10 INPUT "How many numbers to average?", A
20 FOR I = 1 TO A
30 INPUT "Enter number:", B
40 LET C = C + B
50 NEXT I
60 LET D = C/A
70 PRINT "The average is", D
80 END

Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.[8]
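For comparison, here is a sketch of the same averaging program in C, a third-generation language introduced later in this article. Unlike Basic, C requires variables to be declared before use and does not automatically initialize locals to zero, so the running sum must be initialized explicitly.

#include <stdio.h>

int main(void)
{
    int count, i;
    double number, sum = 0.0;   /* C locals are not auto-initialized to zero */

    printf("How many numbers to average? ");
    scanf("%d", &count);

    for (i = 1; i <= count; i++) {
        printf("Enter number: ");
        scanf("%lf", &number);
        sum = sum + number;     /* accumulate, as in line 40 of the Basic program */
    }

    printf("The average is %f\n", sum / count);
    return 0;
}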

History

Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically.

Analytical Engine

Lovelace's description from Note G.

In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine.[9] The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a "store" which was memory to hold 1,000 numbers of 50 decimal digits each.[10] Numbers from the "store" were transferred to the "mill" for processing. It was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables.[9][11] However, after more than £17,000 of the British government's money was spent, the thousands of cogged wheels and gears never fully worked together.[12]

Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843).[13] The description contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program.[12]

Universal Turing machine

In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation.[14] It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state.[15] All present-day computers are Turing complete.[16]

ENIAC

Glenn A. Beck is changing a tube in ENIAC.

The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together.[17] Its 40 units weighed 30 tons, occupied 1,800 square feet (167 m²), and consumed $650 per hour (in 1940s currency) in electricity when idle.[17] It had 20 base-10 accumulators. Programming the ENIAC took up to two months.[17] Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week.[18] It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.[19]

Stored-program computers

Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory.[20] As a result, the computer could be programmed quickly and perform calculations at very fast speeds.[21] Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944.[22] Later, in September 1944, Dr. John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain.[21] The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the construction of the EDVAC and EDSAC computers in 1949.[23]

The IBM System/360 (1964) was a line of six computers, each having the same instruction set architecture. The Model 30 was the smallest and least expensive. Customers could upgrade and retain the same application software.[24] The Model 75 was the top of the line. Each System/360 model featured multiprogramming,[24] meaning multiple processes could reside in memory at once. When one process was waiting for input/output, another could compute.

IBM planned for each model to be programmed using PL/1.[25] A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace COBOL and Fortran.[25] The result was a large and complex language that took a long time to compile.[26]

Computers manufactured until the 1970s had front-panel switches for manual programming.[27] The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs could also be entered automatically via paper tape or punched cards. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.[27]

Very Large Scale Integration

A VLSI integrated-circuit die.

A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964).[28] Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board.[28] During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip.[28]

Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963).[29] The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process.[30] The Czochralski process then converts the rods into a monocrystalline silicon boule crystal.[31] The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors.[32][33] The MOS transistor is the primary component in integrated circuit chips.[29]

Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses.[28] The process to embed instructions onto the matrix was to burn out the unneeded connections.[28] There were so many connections that firmware programmers wrote a computer program on another chip to oversee the burning.[28] The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor.[34]

IBM's System/360 (1964) CPU wasn't a microprocessor.

The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates.[35]

Sac State 8008

Artist's depiction of Sacramento State University's Intel 8008 microcomputer (1972).

The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972).[36] Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex 3-megabyte hard disk drive.[28] It had a color display and keyboard that were packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter.[28] However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose.[36] Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set.[28]

x86 series

In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088.[37] IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are:[a]

  • Memory instructions to set and access numbers and strings in random-access memory.
  • Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on integers.
  • Floating point ALU instructions to perform the primary arithmetic operations on real numbers.
  • Call stack instructions to push and pop words needed to allocate memory and interface with functions.
  • Single instruction, multiple data (SIMD) instructions[b] to increase speed when multiple processors are available to perform the same algorithm on an array of data.

Changing programming environment

VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language.

Programming paradigms and languages

Programming language features exist to provide building blocks to be combined to express programming ideals.[38] Ideally, a programming language should:[38]

  • express ideas directly in the code.
  • express independent ideas independently.
  • express relationships among ideas directly in the code.
  • combine ideas freely.
  • combine ideas only where combinations make sense.
  • express simple ideas simply.

The way a programming language provides these building blocks may be categorized into programming paradigms.[39] For example, different paradigms may differentiate:[39]

  • procedural languages, functional languages, and logical languages.
  • different levels of data abstraction.
  • different levels of class hierarchy.
  • different levels of input datatypes, as in container types and generic programming.

Each of these programming styles has contributed to the synthesis of different programming languages.[39]

A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer.[40] These elements are combined according to a set of rules called a syntax.[40]

  • Keywords are reserved words to form declarations and statements.
  • Symbols are characters to form operations, assignments, control flow, and delimiters.
  • Identifiers are words created by programmers to form constants, variable names, structure names, and function names.
  • Syntax rules are defined in Backus–Naur form.

Programming languages get their basis from formal languages.[41] The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlying problem.[41] An algorithm is a sequence of simple instructions that solve a problem.[42]

Generations of programming language

The evolution of programming language began when the EDSAC [1949] used the first stored computer program in its von Neumann architecture.[43] Programming the EDSAC was in the first generation of programming language.

  • The first generation of programming language is machine language.[44] Machine language requires the programmer to enter instructions using instruction numbers called machine code. For example, the ADD operation on the PDP-11 has instruction number 24576.[45]
  • The second generation of programming language is assembly language.[44] Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. An assembler translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code.[45] The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV.[45] Computers also have instructions like DW (Define Word) to reserve memory cells. Then the MOV instruction can copy integers between registers and memory.
  • The basic structure of an assembly language statement is label, operation, operand, and comment.[46]
  • Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses.
  • Operations allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers.
  • Operands tell the assembler which data the operation will process.
  • Comments allow the programmer to articulate a narrative because the instructions alone are vague.
The key characteristic of an assembly language program is that it forms a one-to-one mapping to its corresponding machine language target.[47]
  • The third generation of programming language uses compilers and interpreters to execute computer programs. The distinguishing feature of a third-generation language is its independence from particular hardware.[48] Early languages include Fortran (1958), COBOL (1959), ALGOL (1960), and BASIC (1964).[44] In 1973, the C programming language emerged as a high-level language that produced efficient machine language instructions.[49] Whereas third-generation languages historically generated many machine instructions for each statement,[50] C has statements that may generate a single machine instruction.[c] Moreover, an optimizing compiler might overrule the programmer and produce fewer machine instructions than statements. Today, an entire paradigm of languages fills the imperative, third-generation spectrum. (A sketch contrasting the generations follows this list.)
  • The fourth generation of programming language emphasizes what output results are desired, rather than how programming statements should be constructed.[44] Declarative languages attempt to limit side effects and allow programmers to write code with relatively few errors.[44] One popular fourth-generation language is called Structured Query Language (SQL).[44] Database developers no longer need to process each database record one at a time. Also, a simple instruction can generate output records without the developer having to understand how the data is retrieved. For example, a single statement such as SELECT name FROM student WHERE grade = 4 describes which records are desired without specifying how the database should find them.
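As a sketch of the distance between generations, the program below pairs one third-generation C statement with the kind of second-generation assembly a compiler might emit for it. The mnemonics and register names in the comments are illustrative assumptions, not the output of any particular compiler or processor.

/* One high-level statement may translate to several machine instructions. */
int x = 2;   /* global with a default value */
int y = 3;
int z;       /* global without a default value; automatically zeroed */

int main(void)
{
    z = x + y;   /* might emit, e.g.: MOV R0,x / ADD R0,y / MOV z,R0 */
    return 0;
}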

Imperative languages

A computer program written in an imperative language

Imperative languages specify a sequential algorithm using declarations, expressions, and statements:[51]

  • A declaration introduces a variable name to the computer program and assigns it to a datatype[52] – for example: var x: integer;
  • An expression yields a value – for example: 2 + 2 yields 4
  • A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something(); (the sketch following this list combines all three elements).
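A minimal C sketch combining all three elements follows; the function name do_something mirrors the hypothetical call used in the example above.

#include <stdio.h>

int x;                      /* a declaration introduces a variable and its datatype */

void do_something(void)     /* hypothetical function from the example above */
{
    printf("x is 4\n");
}

int main(void)
{
    x = 2 + 2;              /* a statement assigns an expression to a variable */
    if (x == 4)             /* a statement uses a value to alter control flow */
        do_something();
    return 0;
}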

Fortran

FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported:

  • arrays.
  • subroutines.
  • "do" loops.

It succeeded because:

  • programming and debugging costs were below computer running costs.
  • it was supported by IBM.
  • applications at the time were scientific.[53]

However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler.[53] The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard and remained so until 1991. Fortran 90 supports:

  • records.
  • pointers to arrays.

COBOL

COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols. It was soon realized that symbols didn't need to be numbers, so strings were introduced.[54] The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.[55]

COBOL's development was tightly controlled, so dialects requiring ANSI standards didn't emerge. As a consequence, the language wasn't changed for 15 years, until 1974. The 1990s version did make consequential changes, like object-oriented programming.[55]

Algol

ALGOL (1960) stands for "ALGOrithmic Language." It had a profound influence on programming language design.[56] Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable structured design. Algol was first to define its syntax using the Backus–Naur form.[56] This led to syntax-directed compilers. It added features like:

  • block structure, where variables were local to their block.
  • arrays with variable bounds.
  • "for" loops.
  • functions.
  • recursion.[56]

Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch, there are C, C++ and Java.[56]

Basic

BASIC (1964) stands for "Beginner's All Purpose Symbolic Instruction Code." It was developed at Dartmouth College for all of their students to learn.[7] If a student didn't go on to a more powerful language, the student would still remember Basic.[7] A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.[7]

Basic pioneered the interactive session.[7] It offered operating system commands within its environment:

  • The 'new' command created an empty slate.
  • Statements were evaluated immediately as they were entered.
  • Statements could be programmed by preceding them with a line number.
  • The 'list' command displayed the program.
  • The 'run' command executed the program.

However, the Basic syntax was too simple for large programs.[7] Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.[6]

C

The C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C." Its purpose was to write the UNIX operating system.[49] C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s.[49] Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like the following (a sketch follows the list):

  • inline assembler.
  • arithmetic on pointers.
  • pointers to functions.
  • bit operations.
  • freely combining complex operators.[49]
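Here is a sketch exercising three of these facilities: arithmetic on pointers, a pointer to a function, and bit operations. The names are illustrative.

#include <stdio.h>

int double_it(int n) { return n << 1; }   /* bit operation: shift left by one */

int main(void)
{
    int values[3] = { 10, 20, 30 };
    int *p = values;

    p = p + 2;                     /* arithmetic on pointers: p now addresses values[2] */
    printf("%d\n", *p);            /* prints 30 */

    int (*f)(int) = double_it;     /* a pointer to a function */
    printf("%d\n", f(21));         /* prints 42 */

    unsigned flags = 0x0F & ~0x04; /* combining bit operators */
    printf("0x%X\n", flags);       /* prints 0xB */

    return 0;
}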

C allows the programmer to control the region of memory in which data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function. (A sketch tying these regions together follows the list below.)

  • The global and static data region is located just above the program region. (The program region is technically called the text region. It is where machine instructions are stored.)
  • The global and static data region is technically two regions.[57] One region is called the initialized data segment, where variables declared with default values are stored. The other region is called the block started by symbol (BSS) segment, where variables declared without default values are stored.
  • Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process.
  • The global and static region stores the global variables that are declared above (outside) the main() function.[58] Global variables are visible to main() and every other function in the source code.
On the other hand, variable declarations inside of main(), other functions, or within { } block delimiters are local variables. Local variables also include formal parameter variables. Parameter variables are enclosed within the parentheses of a function definition.[59] They provide an interface to the function.
  • Local variables declared using the static prefix are also stored in the global and static data region.[57] Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){ static int counter = 0; counter++; return counter; }
  • The stack region is a contiguous block of memory located near the top memory address.[60] Variables placed in the stack are populated from top to bottom (not bottom to top).[60] A stack pointer is a special-purpose register that keeps track of the last memory address populated.[60] Variables are placed into the stack via the assembly language PUSH instruction. Therefore, the addresses of these variables are set during runtime. The method for stack variables to lose their scope is via the POP instruction.
  • Local variables declared without the static prefix, including formal parameter variables,[61] are called automatic variables[58] and are stored in the stack.[57] They are visible inside the function or block and lose their scope upon exiting the function or block.
  • The heap region is located below the stack.[57] It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks.[62] Like the stack, the addresses of heap variables are set during runtime. An out of memory error occurs when the heap pointer and the stack pointer meet.
  • C provides the malloc() library function to allocate heap memory.[63] Populating the heap with data requires an additional copy operation. Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack.
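The sketch below ties the four storage regions together in one C program. The variable names are illustrative, and exact addresses and segment layouts vary by system.

#include <stdio.h>
#include <stdlib.h>

int global_count = 1;    /* global/static region: initialized data segment */
int global_empty;        /* global/static region: BSS segment (no default value) */

int increment_counter(void)
{
    static int counter = 0;   /* static: global/static region, function-only visibility */
    counter++;
    return counter;           /* retains its value across calls */
}

int main(void)
{
    int local = 5;                          /* automatic variable: the stack */
    int *heap_ptr = malloc(sizeof(int));    /* allocation: the heap */

    if (heap_ptr == NULL)                   /* malloc() can fail when memory is exhausted */
        return 1;
    *heap_ptr = 7;                          /* populating the heap is an extra copy step */

    printf("%d %d %d %d\n",
           global_count, local, *heap_ptr, increment_counter());

    free(heap_ptr);                         /* return the block to the heap */
    return 0;
}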

C++

In the 1970s, software engineers needed language support to break large projects down into modules.[64] One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract datatypes.[64] At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Concrete datatypes have their representation as part of their name.[65] Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list.

In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class, it's called an object.[66]

Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming.[67] A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.[68]

Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other persons don't have. Object-oriented languages model subset/superset relationships using inheritance.[69] Object-oriented programming became the dominant language paradigm by the late 1990s.[64]

C++ (1985) was originally called "C with Classes."[70] It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.[71]

An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application:

// grade.h
// -------

// Used to allow multiple source files to include
// this header file without duplication errors.
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H

class GRADE {
public:
    // This is the constructor operation.
    // ----------------------------------
    GRADE( const char letter );

    // This is a class variable.
    // -------------------------
    char letter;

    // This is a member operation.
    // ---------------------------
    int grade_numeric( const char letter );

    // This is a class variable.
    // -------------------------
    int numeric;
};
#endif

A constructor operation is a function with the same name as the class name.[72] It is executed when the calling operation executes the new statement.

A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application:

// grade.cpp
// ---------
#include "grade.h"

GRADE::GRADE( const char letter )
{
    // Reference the object using the keyword 'this'.
    // ----------------------------------------------
    this->letter = letter;

    // This is Temporal Cohesion
    // -------------------------
    this->numeric = grade_numeric( letter );
}

int GRADE::grade_numeric( const char letter )
{
    if ( letter == 'A' || letter == 'a' )
        return 4;
    else
    if ( letter == 'B' || letter == 'b' )
        return 3;
    else
    if ( letter == 'C' || letter == 'c' )
        return 2;
    else
    if ( letter == 'D' || letter == 'd' )
        return 1;
    else
    if ( letter == 'F' || letter == 'f' )
        return 0;
    else
        return -1;
}

Here is a C++ header file for the PERSON class in a simple school application:

// person.h
// --------
#ifndef PERSON_H
#define PERSON_H

class PERSON {
public:
    PERSON( const char *name );
    const char *name;
};
#endif

Here is a C++ source file for the PERSON class in a simple school application:

// person.cpp
// ----------
#include "person.h"

PERSON::PERSON( const char *name )
{
    this->name = name;
}

Here is a C++ header file for the STUDENT class in a simple school application:

// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H

#include "person.h"
#include "grade.h"

// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON {
public:
    STUDENT( const char *name );
    GRADE *grade;
};
#endif

Here is a C++ source file for the STUDENT class in a simple school application:

// student.cpp
// -----------
#include "student.h"
#include "person.h"

STUDENT::STUDENT( const char *name ) :
    // Execute the constructor of the PERSON superclass.
    // -------------------------------------------------
    PERSON( name )
{
    // Nothing else to do.
    // -------------------
}

Here is a driver program for demonstration:

// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"

int main( void )
{
    STUDENT *student = new STUDENT( "The Student" );
    student->grade = new GRADE( 'a' );

    std::cout
        // Notice student inherits PERSON's name.
        << student->name
        << ": Numeric grade = "
        << student->grade->numeric
        << "\n";
    return 0;
}
