COM 3620: Parallel Computing (formally called: Parallel Architecture and Algorithms)

TIME: 11:45 - 1:15, Tuesday and Wednesday, Spring Quarter
      (Classes for this course will begin the week of March 31. Note that we
      will meet for 1-1/2 hours on Tuesday and Wednesday. We will not meet on
      Friday, as described in the Registrar's schedule.)
INSTRUCTOR: Prof. Cooperman (gene@ccs.neu.edu)
PROJECTS: http://www.ccs.neu.edu/home/gene/projects.html
OFFICE HOURS: 1:30 - 2:30, Tues, Wed (after class); and by appointment

If such a systems course interests you, I urge you to enroll this Spring!
The College of Computer Science will offer very few systems courses under the
semester system. These include Parallel Computing, Computer Architecture, and
the required core course (Intensive Systems). Since parallel computing was
also taught last year, it will _not_ be offered in 2003-2004.

I include the course description and tentative syllabus below. I also include
a summary of some of the projects you can choose from at:
    http://www.ccs.neu.edu/home/gene/projects.html
You may also propose your own project.

Parallel Computing will be taught as a practice-oriented systems course, with
a project requirement that makes strong use of systems-level programming. The
"mid-term" and "final" will be your project reports, to be presented both
orally and in written form. You will have a choice of building software on top
of one or more of CORBA, XML/SOAP technologies, POSIX threads, TCP/IP
services, the Computational Grid protocols, MPI (Message Passing Interface),
OpenMP (Open MultiProcessing), and/or other "middleware systems". I will
introduce these technologies during the course in the context of parallel
computing.

The prerequisite is general sophistication in UNIX programming: the ability to
take a system call "spec" from a man page (e.g. section 2 or section 3 of the
UNIX man pages) and apply the system call correctly. (A sketch of this kind of
system-call usage appears after the syllabus below.)

Please note that the course will begin during the week of March 31.

=====================================================================
TENTATIVE SYLLABUS: COM 3620 Parallel Architecture and Algorithms

Prerequisites: general sophistication in UNIX programming

NOTE: Please ignore the catalog description for COM 3620. It dates from circa
1990, and has nothing to do with the modern world.

Parallel computing today is dominated by commodity hardware and the use of
standardized protocols and system services. It is related to distributed
computing, with the following important difference: parallel computing assumes
that the CPU is the bottleneck, while distributed computing assumes that the
network (bandwidth and/or latency) is the bottleneck. The emphasis will be on
understanding the many middleware technologies and adapting them to parallel
computing. The course will include a project requiring the use of one or more
of these middleware technologies.

TOPIC 1: Brief Introduction to Parallel Computing via TOP-C

TOPIC 2: Hardware Interface: POSIX threads (shared memory), TCP/IP sockets
         (distributed memory), and DSM (distributed shared memory): cache
         coherence, bus snooping, synchronization, TCP/IP parameters, and
         other topics (see the POSIX threads sketch after the syllabus)

TOPIC 3: Algorithmic Concepts: parallel prefix, pointer jumping, PRAM and
         bridging models of parallelism (see the parallel prefix sketch after
         the syllabus)

TOPIC 4: Overview of Middleware for Distributed and Parallel Computing: CORBA,
         XML/SOAP technologies, the Computational Grid protocols, MPI (Message
         Passing Interface), POSIX threads, TCP/IP services, parallel BLAS
         (parallel Basic Linear Algebra Subroutines), and other "middleware
         systems" (see the MPI sketch after the syllabus)
TOPIC 5: Programmer's Models of Parallelism: Linda (shared tasks), Cilk
         (work-stealing model of parallelism), TOP-C (Task Oriented
         Parallelism), OpenMP (Open MultiProcessing: shared-memory
         parallelism), HPF (High Performance Fortran: data parallelism)
         (see the OpenMP sketch after the syllabus)

TOPIC 6: Applications of Parallelism
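
=====================================================================
BRIEF CODE SKETCHES (illustrative only, not assigned course material):

The sketch below suggests the level of UNIX programming expected as a
prerequisite: take fork(2) and waitpid(2) directly from their section 2 man
pages and apply them correctly. The printed messages are arbitrary choices
made only for illustration.

    /* Prerequisite-level sketch: apply fork(2) and waitpid(2)
     * as specified in section 2 of the man pages. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* fork(2): duplicate this process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                  /* child process */
            printf("child:  pid %d\n", (int)getpid());
            exit(0);
        }
        int status;                      /* parent: wait per waitpid(2) */
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            exit(1);
        }
        printf("parent: child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
        return 0;
    }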
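
For TOPIC 2, a minimal POSIX threads sketch: several threads increment a
shared counter, with a mutex providing the synchronization discussed under
shared memory. The thread count and iteration count are arbitrary
illustrative choices. Compile with "gcc -pthread" or an equivalent.

    /* POSIX threads sketch: a mutex protects a shared counter. */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4

    static long counter = 0;                       /* shared state */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);             /* enter critical section */
            counter++;
            pthread_mutex_unlock(&lock);           /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);
        printf("counter = %ld (expected %d)\n",
               counter, NUM_THREADS * 100000);
        return 0;
    }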
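
For TOPIC 3, a sketch of parallel prefix (an inclusive scan) using the
logarithmic "doubling" schedule, expressed here on top of OpenMP only for
brevity. The array size and input values are arbitrary illustrative choices;
a real implementation would handle non-power-of-two sizes and tune the grain
size.

    /* Parallel prefix sketch: log2(N) rounds of pairwise additions. */
    #include <stdio.h>

    #define N 16

    int main(void) {
        long a[N], b[N];
        long *in = a, *out = b;

        for (int i = 0; i < N; i++)
            in[i] = 1;                 /* prefix sums should be 1, 2, ..., N */

        /* In the round with offset d, element i adds in the value d
         * positions to its left; double buffering avoids races. */
        for (int d = 1; d < N; d *= 2) {
            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                out[i] = (i >= d) ? in[i] + in[i - d] : in[i];
            long *tmp = in; in = out; out = tmp;   /* swap buffers */
        }

        for (int i = 0; i < N; i++)
            printf("%ld ", in[i]);
        printf("\n");
        return 0;
    }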
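
For TOPIC 4, a minimal MPI sketch: rank 0 sends one integer to rank 1 with
MPI_Send/MPI_Recv. The payload and message tag are arbitrary illustrative
choices. Compile with mpicc and run under mpirun with at least two processes.

    /* MPI sketch: point-to-point message passing between two ranks. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
            MPI_Finalize();
            return 1;
        }

        if (rank == 0) {
            int msg = 42;                              /* arbitrary payload */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", msg);
        }

        MPI_Finalize();
        return 0;
    }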
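
For TOPIC 5, a minimal OpenMP sketch of shared-memory loop parallelism: a
parallel-for loop with a reduction sums an array. The array size is an
arbitrary illustrative choice. Compile with an OpenMP-capable compiler,
e.g. "gcc -fopenmp".

    /* OpenMP sketch: parallel for with a reduction. */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    static double a[N];

    int main(void) {
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        /* Each thread sums a chunk of the iterations; OpenMP combines
         * the per-thread partial sums via the reduction clause. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f (threads available: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }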