Monday, 16 June 2014

PARALLEL COMPUTATION



Name : Tera Nurul Harfiah
Class : 4ia10
NPM : 56410863

a.     Parallelism Concept
NCAR's production supercomputers are clusters of symmetric multiprocessor (SMP) nodes. While these systems can run single processes on a single processor (CPU), high-performance computing requires having many CPUs work in parallel so compute-intensive programs can run to completion in a reasonable amount of wall-clock time. Clusters such as Geyser and Caldera include both CPUs and GPUs.
Unless you have hands-on experience with multiprocessor cluster systems, you may need to learn some new techniques before you can run parallel programs efficiently and economically. This introduction to parallel computing concepts will help prepare you to run your programs successfully on our systems. You may also want to review the reference materials identified at the end of this overview.
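To make the concept concrete, here is a minimal sketch of the basic pattern (my own illustration, not from the NCAR material, written as plain C++ host code that nvcc or any C++11 compiler accepts): a large set of independent iterations is split into contiguous slices, one per worker thread, so the whole job finishes in roughly 1/P of the serial wall-clock time. The array size, worker count, and arithmetic are illustrative values only.

// Parallelism in a nutshell: divide N independent iterations among P workers.
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int N = 1000000;                 // illustrative problem size
    const int P = 4;                       // illustrative worker count
    std::vector<double> data(N, 1.0);
    std::vector<std::thread> workers;

    for (int w = 0; w < P; ++w) {
        workers.emplace_back([&, w] {
            // Each worker processes its own contiguous slice.
            int begin = w * N / P, end = (w + 1) * N / P;
            for (int i = begin; i < end; ++i)
                data[i] = data[i] * 2.0 + 1.0;
        });
    }
    for (auto &t : workers) t.join();      // wait for all slices to finish

    std::printf("data[0] = %f\n", data[0]);   // prints 3.000000
    return 0;
}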
b.     Distributed Processing
Distributed processing was the first term used to describe spreading computation across multiple computers throughout an organization, in contrast to a centralized system. It began with the first minicomputers. Today, distributed processing is usually called "distributed computing." See also client/server.
c.     Parallel Computer Architecture
von Neumann Architecture
  • Named after the Hungarian mathematician John von Neumann who first authored the general requirements for an electronic computer in his 1945 papers.
  • Since then, virtually all computers have followed this basic design, differing from earlier computers which were programmed through "hard wiring".
[Figure: von Neumann architecture]
  • Composed of four main components:
    • Memory
    • Control Unit
    • Arithmetic Logic Unit
    • Input/Output
  • Read/write, random access memory is used to store both program instructions and data
    • Program instructions are coded data which tell the computer to do something
    • Data is simply information to be used by the program
  • Control unit fetches instructions/data from memory, decodes the instructions, and then sequentially coordinates operations to accomplish the programmed task (a toy sketch of this fetch-decode-execute cycle follows after this list).
  • Arithmetic Logic Unit performs basic arithmetic operations
  • Input/Output is the interface to the human operator
  • So what? Who cares?
    • Well, parallel computers still follow this basic design, just multiplied in units. The basic, fundamental architecture remains the same.
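To see the cycle in miniature, the toy sketch below (my own illustration, not from the source text; the three-instruction machine is invented purely for illustration) plays out the description above: the control loop fetches each instruction, decodes it, and executes it with an accumulator standing in for the ALU. Instructions and data are kept in separate arrays here for simplicity, whereas a real von Neumann machine stores both in the same memory.

// A toy von Neumann machine: fetch, decode, execute, sequentially.
#include <cstdio>

enum Op { LOAD, ADD, HALT };                 // invented illustrative opcodes
struct Instr { Op op; int addr; };

int main() {
    int memory[8] = {5, 7, 0, 0, 0, 0, 0, 0};           // data in memory
    Instr program[] = {{LOAD, 0}, {ADD, 1}, {HALT, 0}}; // stored program
    int acc = 0;                             // accumulator (ALU state)
    int pc = 0;                              // program counter (control unit)

    for (bool running = true; running; ) {
        Instr in = program[pc++];            // fetch
        switch (in.op) {                     // decode
            case LOAD: acc = memory[in.addr]; break;   // execute: memory -> ALU
            case ADD:  acc += memory[in.addr]; break;  // execute: arithmetic
            case HALT: running = false;       break;
        }
    }
    std::printf("result = %d\n", acc);       // output: 12
    return 0;
}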

d.     Introduction to Thread Programming
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler; a thread is sometimes described as a lightweight process. The implementation of threads and processes differs from one operating system to another, but in most cases a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its instructions (its code) and its context (the values that its variables reference at any given moment).
On a single processor, multithreading is generally implemented by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, threads can be truly concurrent, with every processor or core executing a separate thread simultaneously.
Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The kernel of an operating system allows programmers to manipulate threads via the system call interface. Threads implemented by the kernel are called kernel threads; a lightweight process (LWP) is a specific type of kernel thread that shares state and information with other LWPs.
Programs can also implement user-space threads by using timers, signals, or other methods to interrupt their own execution, performing a sort of ad hoc time-slicing.
http://en.wikipedia.org/wiki/Thread_%28computing%29
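To illustrate the shared-memory point above, here is a minimal sketch (my own, not from the Wikipedia article; names and counts are illustrative): two threads inside one process update the same global counter, with a mutex serializing access so the increments do not race.

// Two threads in one process share the same address space.
#include <cstdio>
#include <mutex>
#include <thread>

int counter = 0;                       // shared by every thread in the process
std::mutex counter_lock;

void worker() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(counter_lock);
        ++counter;                     // one thread at a time: no lost updates
    }
}

int main() {
    std::thread a(worker), b(worker);  // two threads, one shared memory
    a.join();
    b.join();
    std::printf("counter = %d\n", counter);   // always 200000
    return 0;
}

Two separate processes running the same code would each get their own private copy of counter; it is precisely because threads share their process's memory that the mutex is needed here.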



f.     Introduction to CUDA GPU Programming
CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives program developers direct access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
Using CUDA, the GPUs can be used for general purpose processing (i.e., not exclusively graphics); this approach is known as GPGPU. Unlike CPUs, however, GPUs have a parallel throughput architecture that emphasizes executing many concurrent threads slowly, rather than executing a single thread very quickly.
The CUDA platform is accessible to software developers through CUDA-accelerated libraries, compiler directives (such as OpenACC), and extensions to industry-standard programming languages, including C, C++ and Fortran. C/C++ programmers use 'CUDA C/C++', compiled with nvcc, NVIDIA's LLVM-based C/C++ compiler, and Fortran programmers can use 'CUDA Fortran', compiled with the PGI CUDA Fortran compiler from The Portland Group.
In addition to libraries, compiler directives, CUDA C/C++ and CUDA Fortran, the CUDA platform supports other computational interfaces, including the Khronos Group's OpenCL, Microsoft's DirectCompute, and C++ AMP. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Haskell, MATLAB and IDL, and there is native support in Mathematica.
In the computer game industry, GPUs are used not only for graphics rendering but also in game physics calculations (physical effects like debris, smoke, fire, fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more.
CUDA provides both a low level API and a higher level API. The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was later added in version 2.0, which supersedes the beta released February 14, 2008. CUDA works with all Nvidia GPUs from the G8x series onwards, including GeForce, Quadro and the Tesla line. CUDA is compatible with most standard operating systems. Nvidia states that programs developed for the G8x series will also work without modification on all future Nvidia video cards, due to binary compatibility.
http://en.wikipedia.org/wiki/CUDA
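To make the programming model concrete, here is a minimal CUDA C/C++ sketch (my own, not from the article; sizes and names are illustrative): instead of one fast CPU thread looping over an array, the kernel launches one lightweight GPU thread per element, the many-slow-threads throughput style described above. It would be compiled with nvcc, e.g. nvcc add.cu.

// Vector addition with one GPU thread per element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // illustrative problem size
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));   // unified host+device memory
    cudaMallocManaged(&b, n * sizeof(float));   // (needs a reasonably recent GPU)
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;                     // threads per block
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);  // launch ~a million GPU threads
    cudaDeviceSynchronize();               // wait for the kernel to finish

    std::printf("c[0] = %f\n", c[0]);      // prints 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}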
