Electronic musical instrument

An electronic musical instrument is a musical instrument that produces sound using electronic circuitry. Such an instrument produces sound by outputting an electrical, electronic, or digital audio signal that is ultimately fed into a power amplifier, which drives a loudspeaker, creating the sound heard by the performer and listener.

An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard, which functions similarly to the keyboard on an acoustic piano, except that with an electronic keyboard, the keyboard itself does not make any sound. An electronic keyboard sends a signal to a synth module, computer or other electronic or digital sound generator, which then creates a sound. However, it is increasingly common to separate user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control.
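
As a rough illustration of the controller/synthesizer split described above, the following sketch sends note messages from a program acting as a controller to whatever synthesizer is listening on a MIDI output port. It assumes the third-party Python library mido and at least one available MIDI output; both are assumptions for illustration only.

```python
# Minimal sketch: a "controller" sending a performance description (MIDI)
# to a separate sound generator. Assumes the third-party "mido" library
# and at least one available MIDI output port.
import time
import mido

port_name = mido.get_output_names()[0]        # first available output port
with mido.open_output(port_name) as port:
    # The controller only describes the note; the receiving synthesizer
    # decides how it actually sounds.
    port.send(mido.Message('note_on', note=60, velocity=64))   # middle C
    time.sleep(1.0)
    port.send(mido.Message('note_off', note=60, velocity=64))
```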

All electronic musical instruments can be viewed as a subset of audio signal processing applications. Simple electronic musical instruments are sometimes called sound effects; the border between sound effects and actual musical instruments is often unclear.

In the 2010s, electronic musical instruments are widely used in most styles of music. In popular music styles such as electronic dance music, almost all of the instrument sounds used in recordings are electronic instruments (e.g., bass synth, synthesizer, drum machine). Development of new electronic musical instruments, controllers, and synthesizers continues to be a highly active and interdisciplinary field of research. Specialized conferences, notably the International Conference on New Interfaces for Musical Expression, have been organized to report cutting-edge work, as well as to provide a showcase for artists who perform or create music with new electronic music instruments, controllers, and synthesizers.


Early examples

In the 18th century, musicians and composers adapted a number of acoustic instruments to exploit the novelty of electricity. Thus, in the broadest sense, the first electrified musical instrument was the Denis d'or keyboard, dating from 1753, followed shortly by the clavecin électrique by the Frenchman Jean-Baptiste de Laborde in 1761. The Denis d'or consisted of a keyboard instrument of over 700 strings, electrified temporarily to enhance its sonic qualities. The clavecin électrique was a keyboard instrument with plectra (picks) activated electrically. However, neither instrument used electricity as a sound source.

The first electric synthesizer was invented in 1876 by Elisha Gray. The "Musical Telegraph" was a chance by-product of his telephone technology: Gray accidentally discovered that he could control sound from a self-vibrating electromagnetic circuit, and in doing so invented a basic oscillator. The Musical Telegraph used steel reeds whose oscillations, created by electromagnets, were transmitted over a telephone line. Gray also built a simple loudspeaker device into later models, consisting of a diaphragm vibrating in a magnetic field.

A significant invention, which later had a profound effect on electronic music, was the audion in 1906. This was the first thermionic valve, or vacuum tube, and it led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, among other things. Other early synthesizers included the Telharmonium (1897), the Theremin (1919), Jörg Mager's Sphärophon (1924) and Partiturophone, Taubmann's similar Electronde (1933), Maurice Martenot's ondes Martenot ("Martenot waves", 1928), and Trautwein's Trautonium (1930). The Mellertion (1933) used a non-standard scale, Bertrand's Dynaphone could produce octaves and perfect fifths, the Emicon was an American, keyboard-controlled instrument constructed in 1930, and the German Hellertion combined four instruments to produce chords. Three further instruments also appeared: Oubouhof's Croix Sonore (1934), Ivor Darreg's microtonal 'Electronic Keyboard Oboe' (1937), and the ANS synthesizer, constructed by the Russian scientist Evgeny Murzin from 1937 to 1958. Only two models of the latter were built, and the only surviving example is currently stored at Lomonosov University in Moscow. It has been used in many Russian movies, such as Solaris, to produce unusual, "cosmic" sounds.

Hugh Le Caine, John Hanert, Raymond Scott, composer Percy Grainger (with Burnett Cross), and others built a variety of automated electronic-music controllers during the late 1940s and 1950s. In 1959 Daphne Oram produced a novel method of synthesis, her "Oramics" technique, driven by drawings on a 35 mm film strip; it was used for a number of years at the BBC Radiophonic Workshop. This workshop was also responsible for the theme to the TV series Doctor Who, a piece, largely created by Delia Derbyshire, that more than any other ensured the popularity of electronic music in the UK.

Telharmonium

In 1897 Thaddeus Cahill patented an instrument called the Telharmonium (or Teleharmonium, also known as the Dynamophone). Using tonewheels to generate musical sounds as electrical signals by additive synthesis, it was capable of producing any combination of notes and overtones, at any dynamic level. This technology was later used in the design of the Hammond organ. Between 1901 and 1910 Cahill had three progressively larger and more complex versions made, the first weighing seven tons, the last in excess of 200 tons. Portability was managed only by rail and with the use of thirty boxcars. By 1912, public interest had waned, and Cahill's enterprise was bankrupt.
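
The tonewheel approach is an early, mechanical form of additive synthesis: a tone is built by summing sine waves at whole-number multiples of a fundamental frequency. A small NumPy sketch of the idea follows; the partial amplitudes are arbitrary choices for illustration, not Cahill's.

```python
# Rough illustration of additive synthesis: summing sine partials at
# harmonic frequencies, as a tonewheel instrument does mechanically.
# The amplitudes below are arbitrary.
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(int(SAMPLE_RATE * 1.0)) / SAMPLE_RATE   # one second of time

def additive_tone(fundamental_hz, harmonic_amps):
    """Sum sine partials at integer multiples of the fundamental."""
    tone = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amps, start=1):
        tone += amp * np.sin(2 * np.pi * n * fundamental_hz * t)
    return tone / np.max(np.abs(tone))     # normalize to [-1, 1]

note = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25])   # A3 with four partials
```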

Theremin

Another development, which aroused the interest of many composers, occurred in 1919-1920. In Leningrad, Leon Theremin (actually Lev Termen) built and demonstrated his Etherophone, which was later renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. The Theremin was notable for being the first musical instrument played without touching it. In 1929, Joseph Schillinger composed First Airphonic Suite for Theremin and Orchestra, premièred by the Cleveland Orchestra with Leon Theremin as soloist. The next year Henry Cowell commissioned Theremin to create the first electronic rhythm machine, called the Rhythmicon. Cowell wrote some compositions for it, and he and Schillinger premiered it in 1932.

Ondes Martenot

The 1920s have been called the apex of the Mechanical Age and the dawning of the Electrical Age. In 1922, in Paris, Darius Milhaud began experiments with "vocal transformation by phonograph speed change", which continued until 1927. This decade brought a wealth of early electronic instruments: along with the Theremin came the Ondes Martenot, which was designed to reproduce the microtonal sounds found in Hindu music, and the Trautonium. Maurice Martenot invented the Ondes Martenot in 1928 and soon demonstrated it in Paris. Composers who went on to use the instrument include Boulez, Honegger, Jolivet, Koechlin, Messiaen, Milhaud, Tremblay, and Varèse. Radiohead guitarist and multi-instrumentalist Jonny Greenwood also uses it in his compositions and on many Radiohead songs. In 1937, Messiaen wrote Fête des belles eaux for 6 ondes Martenot, and wrote solo parts for it in Trois petites Liturgies de la Présence Divine (1943-44) and the Turangalîla-Symphonie (1946-48/90).

Trautonium

The Trautonium was invented in 1928. It was based on the subharmonic scale, and the resulting sounds were often used to emulate bell or gong sounds, as in the 1950s Bayreuth productions of Parsifal. In 1942, Richard Strauss used it for the bell- and gong-part in the Dresden première of his Japanese Festival Music. This new class of instruments, microtonal by nature, was only adopted slowly by composers at first, but by the early 1930s there was a burst of new works incorporating these and other electronic instruments.
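
The subharmonic series referred to here divides a base frequency by successive integers (f, f/2, f/3, ...), mirroring the harmonic series. A one-line illustration:

```python
# Subharmonic series of a 440 Hz tone: the base frequency divided by
# successive integers, the inverse of the harmonic (overtone) series.
subharmonics = [440.0 / n for n in range(1, 9)]
print([round(f, 2) for f in subharmonics])   # 440.0, 220.0, 146.67, 110.0, ...
```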

Hammond organ and Novachord

In 1929 Laurens Hammond established his company for the manufacture of electronic instruments. He went on to produce the Hammond organ, which was based on the principles of the Telharmonium, along with other developments including early reverberation units. The Hammond organ is an electromechanical instrument, as it uses both mechanical elements and electronic parts: spinning metal tonewheels produce the different pitches, and a magnetic pickup, similar in design to the pickups in an electric guitar, converts the tonewheels' vibrations into an electrical signal sent to an amplifier and speaker enclosure. While the Hammond organ was designed to be a lower-cost alternative to a pipe organ for church music, musicians soon discovered that it was an excellent instrument for blues and jazz; indeed, an entire genre of music developed around the instrument, known as the organ trio (typically Hammond organ, drums, and a third instrument, either saxophone or guitar).

The first commercially manufactured synthesizer was the Novachord, built by the Hammond Organ Company from 1938 to 1942, which offered 72-note polyphony using 12 oscillators driving monostable-based divide-down circuits, basic envelope control, and resonant low-pass filters. The instrument featured 163 vacuum tubes and weighed 500 pounds. Its use of envelope control is notable, since this is perhaps the most important distinction between the modern synthesizer and other electronic instruments.
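
Envelope control shapes how a note's loudness evolves over time. The sketch below shows a simplified attack-decay-sustain-release (ADSR) envelope, a later formalization of the idea rather than the Novachord's actual circuitry.

```python
# Simplified ADSR envelope: a later formalization of envelope control,
# not the Novachord's actual circuit. Times in seconds, levels in [0, 1].
import numpy as np

def adsr(attack, decay, sustain_level, release, note_length, sr=44100):
    a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(sr * decay), endpoint=False)
    s = np.full(int(sr * max(note_length - attack - decay, 0.0)), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(sr * release))
    return np.concatenate([a, d, s, r])

envelope = adsr(attack=0.01, decay=0.1, sustain_level=0.7,
                release=0.3, note_length=1.0)
# Multiply an oscillator's output by `envelope` to shape the note.
```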


Analogue synthesis 1950-1980

The most commonly used electronic instruments are synthesizers, so called because they artificially generate sound using a variety of techniques. All early circuit-based synthesis involved the use of analogue circuitry, particularly voltage-controlled amplifiers, oscillators, and filters. An important technological development was the invention of the Clavivox synthesizer in 1956 by Raymond Scott with subassembly by Robert Moog. French-born composer and engineer Edgard Varèse created a variety of compositions using electronic horns, whistles, and tape. Most notably, he wrote Poème électronique for the Philips Pavilion at the Brussels World's Fair in 1958.

Modular synthesizers

In the 1950s, RCA produced experimental devices to synthesize voice and music. The Mark II Music Synthesizer was housed at the Columbia-Princeton Electronic Music Center in New York City. Designed by Herbert Belar and Harry Olson at RCA, with contributions from Vladimir Ussachevsky and Peter Mauzey, it was installed at Columbia University in 1957. Consisting of a room-sized array of interconnected sound-synthesis components, it was capable of producing music only by programming, using a paper tape sequencer punched with holes to control pitch sources and filters, similar to a mechanical player piano but capable of generating a wide variety of sounds. The vacuum tube system had to be patched to create timbres.

In the 1960s synthesizers were still usually confined to studios due to their size. They were usually modular in design, their stand-alone signal sources and processors connected with patch cords or by other means and controlled by a common controlling device. Harald Bode, Don Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments, in the late 1950s and early 1960s. Buchla later produced a commercial modular synthesizer, the Buchla Music Easel. Robert Moog, who had been a student of Peter Mauzey and one of the RCA Mark II engineers, created a synthesizer that could reasonably be used by musicians, designing the circuits while he was at Columbia-Princeton. The Moog synthesizer was first displayed at the Audio Engineering Society convention in 1964. It required experience to set up sounds but was smaller and more intuitive than what had come before, less like a machine and more like a musical instrument. Moog established standards for control interfacing, using a logarithmic 1-volt-per-octave scale for pitch control and a separate triggering signal. This standardization allowed synthesizers from different manufacturers to be used together. Pitch control was usually performed either with an organ-style keyboard or with a music sequencer producing a timed series of control voltages. During the late 1960s hundreds of popular recordings used Moog synthesizers. Other early commercial synthesizer manufacturers included ARP, which also started with modular synthesizers before producing all-in-one instruments, and the British firm EMS.
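
Under the 1-volt-per-octave convention, each additional volt of control voltage doubles the oscillator frequency, so a pitch CV maps to frequency as f = f_ref · 2^V. A small worked example follows; the reference frequency chosen for 0 V is an arbitrary assumption.

```python
# 1-volt-per-octave pitch control: each extra volt doubles the frequency.
# The reference frequency assigned to 0 V is an arbitrary choice here.
F_REF_HZ = 261.63   # middle C at 0 V (assumption)

def cv_to_frequency(volts: float) -> float:
    return F_REF_HZ * 2.0 ** volts

print(cv_to_frequency(0.0))       # 261.63 Hz
print(cv_to_frequency(1.0))       # one octave up: 523.26 Hz
print(cv_to_frequency(1.0 / 12))  # one semitone up: ~277.18 Hz
```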

Integrated synthesizers

In 1970, Moog designed the Minimoog, a non-modular synthesizer with a built-in keyboard. The analogue circuits were interconnected with switches in a simplified arrangement called "normalization". Though less flexible than a modular design, normalization made the instrument more portable and easier to use. The Minimoog sold 12,000 units and further standardized the design of subsequent synthesizers with its integrated keyboard, pitch and modulation wheels, and VCO->VCF->VCA signal flow. It has become celebrated for its "fat" sound, and for its tuning problems. Miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments that soon appeared in live performance and quickly became widely used in popular music and electronic art music.

Polyphony

Many early analog synthesizers were monophonic, producing only one tone at a time. Popular monophonic synthesizers include the Moog Minimoog. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, could produce two different pitches at a time when two keys were pressed. Polyphony (multiple simultaneous tones, which enables chords) was only obtainable with electronic organ designs at first. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog and Opus 3.

By 1976 affordable polyphonic synthesizers began to appear, notably the Yamaha CS-50, CS-60 and CS-80, the Sequential Circuits Prophet-5 and the Oberheim Four-Voice. These remained complex, heavy and relatively costly. The recording of settings in digital memory allowed storage and recall of sounds. The first practical polyphonic synth, and the first to use a microprocessor as a controller, was the Sequential Circuits Prophet-5 introduced in late 1977. For the first time, musicians had a practical polyphonic synthesizer that could save all knob settings in computer memory and recall them at the touch of a button. The Prophet-5's design paradigm became a new standard, slowly pushing out more complex and recondite modular designs.


Tape recording

In 1935, another significant development was made in Germany. Allgemeine Elektrizitäts Gesellschaft (AEG) demonstrated the first commercially produced magnetic tape recorder, called the Magnetophon. Audio tape, which had the advantage of being fairly light as well as having good audio fidelity, ultimately replaced the bulkier wire recorders.

The term "electronic music" (which first came into use during the 1930s) came to include the tape recorder as an essential element: "electronically produced sounds recorded on tape and arranged by the composer to form a musical composition" It was also indispensable to Musique concrète.

Tape also gave rise to the first analogue sample-playback keyboards, the Chamberlin and its more famous successor the Mellotron, an electro-mechanical, polyphonic keyboard originally developed and built in Birmingham, England, in the early 1960s.


Sound sequencer

During the 1940s-1960s, Raymond Scott, an American composer of electronic music, invented various kinds of music sequencers for his electronic compositions. Step sequencers played rigid patterns of notes using a grid of (usually) 16 buttons, or steps, each step being 1/16 of a measure. These patterns of notes were then chained together to form longer compositions. Software sequencers have been used continuously since the 1950s in the context of computer music, including computer-played music (software sequencers), computer-composed music (music synthesis), and computer sound generation (sound synthesis).
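
A minimal sketch of the step-sequencer idea appears below: a fixed grid of 16 steps, each lasting one sixteenth of a measure and holding an optional note, repeated in order. The pattern, tempo, and note numbers are illustrative assumptions.

```python
# Minimal 16-step sequencer sketch: each step is 1/16 of a measure and
# holds an optional MIDI note number. Pattern and tempo are illustrative.
import time

TEMPO_BPM = 120
STEP_SECONDS = 60.0 / TEMPO_BPM / 4                      # sixteenth note in 4/4
PATTERN = [60, None, 63, None, 67, None, 63, None] * 2   # 16 steps

def play_step(note):
    # Placeholder: a real sequencer would send this note to a synthesizer.
    if note is None:
        print("rest")
    else:
        print("note on:", note)

for note in PATTERN:
    play_step(note)
    time.sleep(STEP_SECONDS)
```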


Digital era 1980-2000

Digital synthesis

The first digital synthesizers were academic experiments in sound synthesis using digital computers. FM synthesis was developed for this purpose, as a way of generating complex sounds digitally with the smallest number of computational operations per sound sample. Frequency modulation synthesis (FM synthesis) was first developed by John Chowning at Stanford University in the late 1960s, and Chowning exclusively licensed his FM synthesis patent to Yamaha in 1975. Yamaha subsequently released its first FM synthesizers, the GS-1 and GS-2, which were costly and heavy. There followed a pair of smaller, preset versions, the CE20 and CE25 Combo Ensembles, targeted primarily at the home organ market and featuring four-octave keyboards. Yamaha's third generation of digital synthesizers, the DX7 and DX9 (1983), was a commercial success. Both models were compact, reasonably priced, and dependent on custom digital integrated circuits to produce FM tonalities. The DX7 was the first mass-market all-digital synthesizer. It became indispensable to many music artists of the 1980s, and demand soon exceeded supply. The DX7 sold over 200,000 units within three years.
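
In its simplest two-operator form, FM synthesis varies the phase of a carrier sine wave with a second (modulator) sine wave, y(t) = sin(2π f_c t + I sin(2π f_m t)), where the modulation index I and the carrier-to-modulator ratio shape the spectrum. The NumPy sketch below uses arbitrary example values, not DX7 presets.

```python
# Two-operator FM synthesis: a modulator sine wave varies the phase of a
# carrier sine wave. The ratio and modulation index are arbitrary examples,
# not DX7 presets.
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(int(SAMPLE_RATE * 1.0)) / SAMPLE_RATE

def fm_tone(carrier_hz, ratio, index):
    modulator_hz = carrier_hz * ratio
    modulator = np.sin(2 * np.pi * modulator_hz * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

tone = fm_tone(carrier_hz=440.0, ratio=2.0, index=3.0)   # bright, bell-like
```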

The DX series was not easy to program but offered a detailed, percussive sound that led to the demise of the electro-mechanical Rhodes piano, which was heavier and larger than a DX synth. Following the success of FM synthesis, Yamaha signed a contract with Stanford University in 1989 to develop digital waveguide synthesis, leading to the first commercial physical modeling synthesizer, Yamaha's VL-1, in 1994. The DX7 was affordable enough for amateurs and young bands to buy, unlike the costly synthesizers of previous generations, which were mainly used by top professionals.
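
Physical modeling simulates the behaviour of a vibrating object rather than playing back its recorded waveform. The sketch below uses the Karplus-Strong plucked-string algorithm, a simple relative of waveguide techniques chosen here for illustration; it is not Yamaha's VL-1 algorithm.

```python
# Karplus-Strong plucked-string synthesis: a short delay line with a simple
# averaging filter in its feedback path. A basic relative of waveguide
# techniques, not the VL-1's algorithm.
import numpy as np

def karplus_strong(frequency_hz, duration_s, sr=44100):
    period = int(sr / frequency_hz)              # delay-line length in samples
    buffer = np.random.uniform(-1, 1, period)    # burst of noise = the "pluck"
    out = np.empty(int(sr * duration_s))
    for i in range(len(out)):
        out[i] = buffer[i % period]
        # average the current sample with its neighbour and feed it back,
        # which gradually damps the high frequencies like a decaying string
        buffer[i % period] = 0.5 * (buffer[i % period] + buffer[(i + 1) % period])
    return out

pluck = karplus_strong(196.0, 2.0)   # roughly a guitar G string
```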

Sampling

The Fairlight CMI (Computer Musical Instrument), the first polyphonic digital sampler, was the harbinger of sample-based synthesizers. Designed in 1978 by Peter Vogel and Kim Ryrie, and based on a dual-microprocessor computer designed by Tony Furse in Sydney, Australia, the Fairlight CMI gave musicians the ability to modify volume, attack, and decay, and to use special effects like vibrato. Sample waveforms could be displayed on-screen and modified using a light pen. The Synclavier from New England Digital was a similar system. Jon Appleton (with Jones and Alonso) invented the Dartmouth Digital Synthesizer, which later became the New England Digital Corp's Synclavier. The Kurzweil K250, first produced in 1983, was also a successful polyphonic digital music synthesizer, noted for its ability to reproduce several instruments synchronously and for its velocity-sensitive keyboard.
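
At its core, sample playback reads a stored waveform back at a different rate to transpose it. The bare-bones sketch below uses linear interpolation and a synthetic stand-in waveform; both are assumptions for illustration.

```python
# Bare-bones sample playback: read a stored waveform at a different rate to
# transpose it. Uses linear interpolation; the source waveform is a stand-in.
import numpy as np

sr = 44100
source = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)   # stand-in "sample"

def play_at_pitch(sample, semitones):
    rate = 2.0 ** (semitones / 12.0)                  # playback-rate ratio
    positions = np.arange(0, len(sample) - 1, rate)   # fractional read points
    return np.interp(positions, np.arange(len(sample)), sample)

up_a_fifth = play_at_pitch(source, 7)   # shorter and higher-pitched
```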

Computer music

An important new development was the advent of computers for the purpose of composing music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a method of composing that employs mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used graph paper and a ruler to aid in calculating the velocity trajectories of glissandi for his orchestral composition Metastasis (1953-54), but later turned to the use of computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962).
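
As a toy illustration of the stochastic idea, the sketch below draws pitches from a weighted probability distribution instead of writing them deterministically. The pitch set and weights are arbitrary assumptions, not Xenakis's actual procedures.

```python
# Toy stochastic composition: pitches drawn from a weighted probability
# distribution. Pitch set and weights are arbitrary, not Xenakis's methods.
import random

PITCHES = [60, 62, 63, 65, 67, 70]          # MIDI note numbers
WEIGHTS = [0.3, 0.1, 0.2, 0.1, 0.2, 0.1]    # probabilities, summing to 1

def stochastic_phrase(length):
    return random.choices(PITCHES, weights=WEIGHTS, k=length)

print(stochastic_phrase(16))
```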

The impact of computers continued in 1956, when Lejaren Hiller and Leonard Isaacson composed the Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition.

In 1957, Max Mathews at Bell Labs wrote MUSIC, the first of the MUSIC-N family of computer programs for generating digital audio waveforms through direct synthesis. Barry Vercoe later wrote MUSIC 11, a next-generation music synthesis program based on MUSIC IV-BF; it subsequently evolved into Csound, which is still widely used.

In the mid-1980s, Miller Puckette at IRCAM developed graphical signal-processing software for the 4X called Max (named after Max Mathews), and later ported it to the Macintosh (with David Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition within reach of most composers with a modest background in computer programming.

MIDI

In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate control instructions with other instruments and the prevalent microcomputer. This standard was dubbed MIDI (Musical Instrument Digital Interface). A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized.
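
On the wire, a MIDI channel message is compact. A note-on, for example, is three bytes: a status byte (0x90 plus the channel number), the note number, and the velocity. A small sketch assembling one by hand:

```python
# A MIDI note-on message is three bytes: status (0x90 | channel),
# note number (0-127), and velocity (0-127).
def note_on(channel: int, note: int, velocity: int) -> bytes:
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

print(note_on(0, 60, 100).hex())   # '903c64' -> middle C, velocity 100
```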

The advent of MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer.

MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments.


Modern electronic musical instruments

The increasing power and decreasing cost of sound-generating electronics (and especially of the personal computer), combined with the standardization of the MIDI and Open Sound Control musical performance description languages, has facilitated the separation of musical instruments into music controllers and music synthesizers.

By far the most common musical controller is the musical keyboard. Other controllers include the radiodrum, Akai's EWI and Yamaha's WX wind controllers, the guitar-like SynthAxe, the BodySynth, the Buchla Thunder, the Continuum Fingerboard, the Roland Octapad, various isomorphic keyboards including the Thummer, the Kaossilator Pro, and kits like the I-CubeX.

Reactable

The Reactable is a round translucent table with a backlit interactive display. By placing and manipulating blocks called tangibles on the table surface, while interacting with the visual display via finger gestures, a virtual modular synthesizer is operated, creating music or sound effects.

Percussa AudioCubes

AudioCubes are autonomous wireless cubes powered by an internal computer system and rechargeable battery. They have internal RGB lighting, and are capable of detecting each other's location, orientation and distance. The cubes can also detect distances to the user's hands and fingers. Through interaction with the cubes, a variety of music and sound software can be operated. AudioCubes have applications in sound design, music production, DJing and live performance.

Kaossilator

The Kaossilator and Kaossilator Pro are compact instruments in which the position of a finger on a touch pad controls two characteristics of a note: usually the pitch is changed with a left-right motion, while a tonal property, filter, or other parameter changes with an up-down motion. The touch pad can be set to different musical scales and keys. The instrument can record a repeating loop of adjustable length, set to any tempo, and new loops of sound can be layered on top of existing ones. This lends itself to electronic dance music but is more limited for controlled sequences of notes, as the pad on a regular Kaossilator is featureless.
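
A sketch of this X/Y mapping follows: the horizontal position selects a pitch from the chosen scale, while the vertical position sets another parameter such as a filter cutoff. The scale, note range, and cutoff range are illustrative assumptions, not Korg's actual mapping.

```python
# Illustrative X/Y touch-pad mapping: x picks a pitch from a scale,
# y sets a filter cutoff. Scale and ranges are assumptions, not Korg's.
MINOR_PENTATONIC = [0, 3, 5, 7, 10]   # semitone offsets within one octave

def pad_to_parameters(x, y, root_note=57, octaves=2):
    """Map x, y in [0, 1) to (MIDI note, filter cutoff in Hz)."""
    degrees = len(MINOR_PENTATONIC) * octaves
    step = int(x * degrees)
    octave, degree = divmod(step, len(MINOR_PENTATONIC))
    note = root_note + 12 * octave + MINOR_PENTATONIC[degree]
    cutoff_hz = 200.0 + y * (8000.0 - 200.0)
    return note, cutoff_hz

print(pad_to_parameters(0.5, 0.25))   # e.g. (69, 2150.0)
```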

Eigenharp

The Eigenharp is a large instrument resembling a bassoon, played using large buttons, a drum sequencer, and a mouthpiece. The sound processing is done on a separate computer.

XTH Sense

The XTH Sense is a wearable instrument that uses muscle sounds from the human body (known as the mechanomyogram) to make music and sound effects. As a performer moves, the body produces muscle sounds that are captured by a chip microphone worn on the arm or leg. The muscle sounds are then live-sampled using a dedicated software program and a library of modular audio effects. The performer controls the live-sampling parameters by varying the force, speed, and articulation of the movement.

AlphaSphere

The AlphaSphere is a spherical instrument consisting of 48 tactile pads that respond to pressure as well as touch. Custom software allows the pads to be programmed, individually or in groups, in terms of function, note, and pressure parameters, among many other settings. The primary concept of the AlphaSphere is to increase the level of expression available to electronic musicians, by allowing for the nuanced playing style of a musical instrument.


Chip music

Chiptune, chipmusic, or chip music is music written in sound formats where many of the sound textures are synthesized or sequenced in real time by a computer or video game console sound chip, sometimes including sample-based synthesis and low bit sample playback. Many chip music devices featured synthesizers in tandem with low rate sample playback.


DIY culture

During the late 1970s and early 1980s, DIY (do-it-yourself) designs were published in hobby electronics magazines (notably the Formant modular synth, a DIY clone of the Moog system, published by Elektor), and kits were supplied by companies such as PAiA in the US and Maplin Electronics in the UK.

Circuit bending

In 1966, Reed Ghazala discovered and began to teach "circuit bending": the application of the creative short circuit, a process of chance short-circuiting to create experimental electronic instruments, exploring sonic elements mainly of timbre, with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music concept.

Much of this manipulation of circuits directly, especially to the point of destruction, was pioneered by Louis and Bebe Barron in the early 1950s, such as their work with John Cage on the Williams Mix and especially in the soundtrack to Forbidden Planet.

Modern circuit bending is the creative customization of the circuits within electronic devices such as low-voltage, battery-powered guitar effects, children's toys, and small digital synthesizers to create new musical or visual instruments and sound generators. Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with "bent" instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit. With the revived interest in analogue synthesizers, circuit bending became a cheap solution for many experimental musicians to create their own individual analogue sound generators. Nowadays many schematics can be found for building noise generators such as the Atari Punk Console or the Dub Siren, as well as simple modifications for children's toys, such as the famous Speak & Spell, that are often made by circuit benders.

Modular synthesizers

The modular synthesizer is a type of synthesizer consisting of separate interchangeable modules. These are also available as kits for hobbyist DIY constructors. Many hobbyist designers also make bare PCBs and front panels available for sale to other hobbyists.


2010s

According to a forum post in December 2010, Sixense Entertainment is working on musical control with the Sixense TrueMotion motion controller. Immersive virtual musical instruments, or immersive virtual instruments for music and sound, aim to represent musical events and sound parameters in a virtual reality so that they can be perceived not only through auditory feedback but also visually in 3D, and possibly through tactile as well as haptic feedback, allowing the development of novel interaction metaphors beyond manipulation, such as prehension.


External links

  • 120 Years of Electronic Music
  • A chronology of computer and electronic music (including instruments)
  • History of Electronic Music (French)
  • Tons of Tones!!: a site with technical data on electronic modelling of musical tones

DIY

  • DIY Hardware and Software Discussion forum at Electro-music.com
  • The Synth-DIY email list
  • Ken Stone's Do-It-Yourself Page
  • Music From Outer Space: information and parts to self-build a synthesizer
  • SDIY wiki: a wiki about DIY electronic musical instruments

Source of article : Wikipedia