OS: Linux Kernel


A UNIX kernel is an interpreter that translates user-level system calls into synchronized access to the hardware and device drivers. The calls themselves are defined by the POSIX standard.

The interpreter manages user-level system call requests for access to files or processes. The kernel scheduler allows UNIX to manage the file and process subsystems. The file subsystem works with raw devices or block devices. The process subsystem handles process synchronization, inter-process communication, memory management and process scheduling.

The kernel manages all of this through two key structures: the process table and the user structure. The process table holds the scheduling parameters, memory image, signals and other per-process state. The user structure holds the machine registers, the state of the current system call, the file descriptor table, accounting information and the kernel stack.
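
As a rough illustration of this split (not actual kernel code -- every field name below is a simplified stand-in for the real ones), the two structures can be pictured like this:

    /* Simplified sketch only; the real definitions live in the kernel headers. */
    struct proc_entry {               /* one slot in the process table */
        int   pid;
        int   state;                  /* runnable, sleeping, zombie, ... */
        int   priority;               /* scheduling parameters */
        long  pending_signals;
        void *memory_image;           /* memory-management information */
    };

    struct user_area {                /* the per-process "user structure" */
        unsigned long saved_regs[16]; /* machine registers at kernel entry */
        int   syscall_state;          /* state of the current system call */
        int   fd_table[32];           /* file descriptor table */
        long  cpu_time_used;          /* accounting */
        char  kernel_stack[4096];     /* kernel stack for this process */
    };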

A Tour of the Linux Kernel Source

Source: http://www.tldp.org/LDP/khg/HyperNews/get/tour/tour.html by Alessandro Rubini, rubini@pop.systemy.it

This section tries to explain the Linux source code in an orderly fashion, helping the reader to achieve a good understanding of how the source code is laid out and how the most relevant Unix features are implemented. The target is to help experienced C programmers who are not familiar with Linux become acquainted with the overall Linux design. That is why the entry point chosen for the kernel tour is the kernel's own entry point: booting the system.

A good understanding of the C language is needed to follow the material, as well as some familiarity with Unix concepts and the PC architecture. However, apart from a few short illustrative sketches, hardly any C code appears in this section; mostly it gives pointers to the actual source code. The aim is to give an informal overview.

Every pathname for a file referenced in this section is relative to the main source directory, usually

/usr/src/linux


Booting the System

When a PC is powered up, the 80x86 processor starts in real mode and executes the code at address 0xFFFF0, which is an address inside the ROM BIOS. The BIOS performs some tests on the system and initializes the interrupt vectors at physical address 0. After that, it loads the first sector of the bootable device to address 0x7C00 and jumps to it. The device is usually a floppy or a hard drive. This may look simplistic, but it is really all you need to understand how the kernel starts up.

The very first part of the Linux kernel is written in 8086 assembly language (boot/bootsect.S). When run, it moves itself to the absolute address 0x90000, loads the next 2 kBytes of code from the boot device to address 0x90200, and the rest of the kernel to address 0x10000. The message "Loading..." is displayed while the system loads. Control is then passed to the code in boot/Setup.S, another real-mode assembly source.

The setup portion identifies some features of the host machine and the type of VGA board. If requested, it also asks the user to choose a video mode for the console. It then moves the whole system from address 0x10000 to address 0x1000, enters protected mode and jumps to the rest of the system, at 0x1000.

The next step is kernel decompression. The code at 0x1000 comes from zBoot/head.S, which initializes the registers and invokes decompress_kernel(), which in turn is made up of zBoot/inflate.c, zBoot/unzip.c and zBoot/misc.c. The decompressed data goes to address 0x100000 (1 MB), and this is the reason why Linux usually cannot run with less than 2 MB of RAM.

Encapsulation of the kernel in a gzip file is accomplished by the Makefile and utilities in the zBoot directory. They are interesting to look at.

Kernel release 1.1.75 moved the boot and zBoot directories to arch/i386/boot. This change is meant to allow building kernels for different architectures. In this section the information given is specific to i386.

The decompressed code is executed at address 0x1010000 [this may have changed since], where all the 32-bit setup is accomplished: the IDT, GDT and LDT are loaded, the processor and coprocessor are identified, and paging is set up; finally, the routine start_kernel is invoked. The source for these operations is in boot/head.S. It is probably the trickiest code in the whole kernel.

Note that if any error occurs during any of the preceding steps, the computer will lock up: the operating system cannot deal with errors when it isn't yet fully operational.

start_kernel() resides in init/main.c and never returns. Everything from now on is coded in C, apart from interrupt management and system call enter/leave (well, most of the macros embed assembly code too).

Spinning the Wheel

After dealing with all the tricky questions, start_kernel() initializes all the parts of the kernel, in particular:

  • Setting the memory bounds and calling paging_init().
  • Initializing the traps, IRQ channels and scheduling.
  • Parsing the command line.
  • If requested, allocating a profiling buffer.
  • Initializing all the device drivers and disk buffering, as well as a few other minor parts.
  • Calibrating the delay loop (computing the "BogoMips" number).
  • Checking whether interrupt 16 works with the coprocessor.

Finally, the kernel is ready to move_to_user_mode(), in order to fork the init process, whose code is in the same source file. Process number 0, the so-called idle task, keeps running in an infinite idle loop.

The init process tries to execute /etc/init, then /bin/init, then /sbin/init.

If none of them succeeds, the code falls back to running "/bin/sh /etc/rc" and forking a root shell on the first terminal. This fallback dates back to Linux 0.01, when the operating system was made up of the kernel alone and no login process was available.
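
In user-space terms, the "try each standard location, then fall back to a shell running /etc/rc" logic looks roughly like the sketch below; the paths are the ones named above, but the surrounding code is only an illustration, not the actual init/main.c source:

    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative only: try the standard init locations in turn,
     * then fall back to running the rc script through a shell. */
    int main(void)
    {
        execl("/etc/init",  "init", (char *)NULL);
        execl("/bin/init",  "init", (char *)NULL);
        execl("/sbin/init", "init", (char *)NULL);

        /* None of the above existed or could be executed. */
        execl("/bin/sh", "sh", "/etc/rc", (char *)NULL);

        perror("could not start init or a fallback shell");
        return 1;
    }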

After exec()ing the init program from one of the standard places (we assume at least one of them exists), the kernel has no direct control over the program flow. Its role, from now on, is to provide processes with system calls, as well as to service asynchronous events (such as hardware interrupts). Multitasking has been set up, and it is now init that manages multiuser access by fork()ing system daemons and login processes.

Since the kernel is in charge of providing services, the discussion continues by looking at those services (the "system calls"), as well as giving a general picture of the underlying data structures and the organization of the code.

How the Kernel Sees Processes

From the kernel's point of view, a process is nothing more than an entry in the process table.

The process table is therefore one of the most important data structures in the system, together with the memory-management tables and the buffer cache. The individual item in the process table is the task_struct structure, quite a huge one, defined in include/linux/sched.h. Within the task_struct both low-level and high-level information is kept -- ranging from a copy of the hardware registers to the inode of the working directory of the process.

The process table is both an array and a doubly linked list, as well as a tree. The physical implementation is a static array of pointers, whose length is NR_TASKS, a constant defined in include/linux/tasks.h, and each structure resides in a reserved memory page. The list structure is achieved through the pointers next_task and prev_task, while the tree structure is quite complex and will not be described here. You may wish to change NR_TASKS from the default value of 128, but be sure to force recompilation of every source file that depends on it.

After booting is over, the kernel is always working on behalf of one of the processes, and the global variable current, a pointer to a task_struct item, is used to record the running process. current is only changed by the scheduler, in kernel/sched.c. When all processes must be looked at, however, the macro for_each_task is used. It is considerably faster than a sequential scan of the array when the system is lightly loaded.
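
The trick behind for_each_task is a circular doubly linked list threaded through the task structures and anchored at the init task. The stand-alone sketch below mimics that approach in user space; the structure and the macro are simplified look-alikes, not the kernel's own definitions:

    #include <stdio.h>

    struct task {                    /* simplified stand-in for task_struct */
        int pid;
        struct task *next_task;
        struct task *prev_task;
    };

    /* Walk every task except the anchor, in the style of the kernel macro. */
    #define for_each_task(p, anchor) \
        for ((p) = (anchor); ((p) = (p)->next_task) != (anchor); )

    int main(void)
    {
        struct task init_task = { 0, &init_task, &init_task };
        struct task a = { 1, NULL, NULL }, b = { 2, NULL, NULL };
        struct task *p;

        /* Insert a and b after the anchor (crude SET_LINKS-style linking). */
        a.next_task = init_task.next_task; a.prev_task = &init_task;
        init_task.next_task->prev_task = &a; init_task.next_task = &a;
        b.next_task = init_task.next_task; b.prev_task = &init_task;
        init_task.next_task->prev_task = &b; init_task.next_task = &b;

        for_each_task(p, &init_task)
            printf("pid %d\n", p->pid);
        return 0;
    }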

A process always runs either in user mode or in kernel mode. The main body of a user program is executed in user mode and system calls are executed in kernel mode. The stacks used by the process in the two execution modes are different as well -- a conventional stack segment is used in user mode, while a fixed-size stack (one page, owned by the process) is used in kernel mode. The kernel stack page is never swapped out, because it must be available whenever a system call is entered.

System calls, within the kernel, exist as C language functions, their 'official' name being prefixed by `sys_'. A system call named, for example, burnout invokes the kernel function sys_burnout().

Looking at for_each_task and SET_LINKS, in include/linux/sched.h, helps in understanding the list and tree structures within the process table.

Creating and Destroying Processes

A unix system creates a process through the fork() system call, and process termination is performed either by exit() or by receiving a signal. The Linux implementations of these calls reside in kernel/fork.c and kernel/exit.c.

Forking is easy, and fork.c is quite short and readily understandable. Its main task is filling in the data structure for the new process. Relevant steps, apart from filling in the fields, are the following (a simplified sketch appears after the list):

  • Getting a free page to hold the task_struct
  • Finding an empty process slot (find_empty_process())
  • Getting another free page for the kernel_stack_page
  • Copying the father's LDT to the child
  • Duplicating the father's mmap information
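
Put together, and with every name below treated as an invented stand-in rather than the real kernel interface, the sequence reads roughly like this:

    #include <stdlib.h>

    /* All names below are invented stand-ins to illustrate the list above;
     * they are not the real kernel interfaces. Error handling is omitted. */
    struct task_struct { int pid; void *ldt; void *mmap_info; };

    static void *get_free_page(void)      { return calloc(1, 4096); }
    static int   find_empty_process(void) { return 1; /* pretend slot 1 is free */ }

    static struct task_struct *sketch_of_fork(struct task_struct *parent)
    {
        struct task_struct *child = get_free_page();   /* page for the task_struct  */
        int slot = find_empty_process();               /* free process-table slot   */
        void *kernel_stack_page = get_free_page();     /* page for the kernel stack */

        child->pid = slot;
        child->ldt = parent->ldt;                      /* "copy" the father's LDT    */
        child->mmap_info = parent->mmap_info;          /* duplicate mmap information */

        (void)kernel_stack_page;
        return child;
    }

    int main(void)
    {
        struct task_struct parent = { 0, 0, 0 };
        struct task_struct *child = sketch_of_fork(&parent);
        free(child);
        return 0;
    }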

sys_fork() also manages file descriptors and inodes.

The 1.0 kernel offers some vestigial support for threading, and the fork() system call shows some hints of it. Kernel threads are a work in progress outside of the mainstream kernel.

Exiting from a process is trickier, because the parent process must be notified about any child that exits. Moreover, a process can be made to exit by being kill()ed by another process (these are Unix features). The file exit.c is therefore the home of sys_kill() and of the various flavours of sys_wait(), in addition to sys_exit().

The code in exit.c is not described here -- it is not that interesting. It deals with a lot of details in order to leave the system in a consistent state when a process exits. The POSIX standard, in particular, is quite detailed about the signals that must be handled during process exit.

Executing Programs

After fork()ing, two copies of the same program are running. One of them usually exec()s another program. The exec() system call must locate the binary image of the executable file, load it and run it. The word 'load' doesn't necessarily mean "copy the binary image into memory", because Linux supports demand loading.
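
Seen from user space, this fork-then-exec pattern is the classic POSIX idiom:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* two copies of this program now exist */

        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* child: replace this copy with another program */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");           /* only reached if the exec failed */
            _exit(127);
        }

        /* parent: wait for the child to finish */
        waitpid(pid, NULL, 0);
        return 0;
    }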

The Linux implementation of exec() supports different binary formats. This is accomplished through the linux_binfmt structure, which embeds two pointers to functions -- one to load the executable and the other to load the library -- with each binary format representing both the executable and the library. Loading of shared libraries is implemented in the same source file as exec(), but for now let's concentrate on exec() itself.

Unix systems provide the programmer with six flavours of the exec() function. All but one of them can be implemented as library functions, and the Linux kernel implements sys_execve() alone. It performs quite a simple task: it loads the head of the executable and tries to execute it. If the first two bytes are "#!", the first line is parsed and an interpreter is invoked; otherwise the registered binary formats are tried in turn.
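
The "#!" check itself is easy to picture in plain C. The toy program below just reads the first line of a file and reports which path the kernel would take; the parsing is deliberately simplistic and nothing here is the actual fs/exec.c logic:

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        char line[256];
        FILE *f;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <executable>\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        if (fgets(line, sizeof(line), f) && line[0] == '#' && line[1] == '!') {
            line[strcspn(line, "\r\n")] = '\0';
            printf("script: would run interpreter '%s'\n", line + 2);
        } else {
            printf("not a script: the registered binary formats would be tried\n");
        }
        fclose(f);
        return 0;
    }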

The native Linux format is supported directly within fs/exec.c, and the relevant functions are load_aout_binary and load_aout_library. For binaries, the function that loads "a.out" executables ends up either mmap()ing the disk file or calling read_exec(). mmap() is used when Linux can demand-load program pages into memory as they are accessed, while read_exec() is used when memory mapping is not supported by the host filesystem (for example the "msdos" filesystem).

The 1.1 kernel includes a revised msdos filesystem, which supports mmap(). Moreover, the struct linux_binfmt became a linked list rather than an array, to allow loading new binary formats as kernel modules. Finally, the structure itself has been extended to give access to format-related core-dump routines.

Accessing the Filesystem

We all know that the filesystem is the most basic resource in a Unix system, so basic and ubiquitous that it deserves an easy-to-remember name -- for convenience we will use the simple abbreviation "fs".

It is assumed here that the reader already knows the basic Unix fs ideas -- access permissions, inodes, the superblock, mounting and umounting. Those concepts are well explained in the general Unix literature, so we will concentrate on Linux-specific fs issues.

Early Unixes used to support a single fs type, whose structure was spread throughout the whole kernel. Today, a standardized interface is used between the kernel and the fs, in order to ease data interchange across architectures. Linux provides a standardized layer to pass information between the kernel and each fs module. This interface layer is called VFS, for "virtual filesystem".

Filesystem code is therefore split into two layers: the upper layer is concerned with the management of kernel tables and data structures, while the lower layer is made up of the set of fs-dependent functions, invoked through the VFS data structures. All the fs-independent material resides in the fs/*.c files. They address the following issues:

  • Managing the buffer cache (buffer.c);
  • Responding to the fcntl() and ioctl() system calls (fcntl.c and ioctl.c);
  • Mapping pipes and fifos on inodes and buffers (fifo.c, pipe.c);
  • Managing file- and inode-tables (file_table.c, inode.c);
  • Locking and unlocking files and records (locks.c);
  • Mapping names to inodes (namei.c, open.c);
  • Implementing the tricky select() function (select.c);
  • Providing information (stat.c);
  • Mounting and umounting filesystems (super.c);
  • exec()ing executables and dumping cores (exec.c);
  • Loading the various binary formats (bin_fmt*.c, as outlined above).

The VFS interface, then, consists of a set of relatively high-level operations which are invoked from the fs-independent code and are actually performed by each filesystem type. The most relevant structures are inode_operations and file_operations, though they're not alone: other structures exist as well. All of them are defined within include/linux/fs.h.
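
The underlying pattern is simply a table of function pointers that the fs-independent code calls through. The self-contained sketch below shows that idea with invented names and just one operation, rather than the real inode_operations/file_operations layout:

    #include <stdio.h>

    /* Invented, minimal analogue of a VFS operations table. */
    struct demo_file;

    struct demo_file_operations {
        long (*read)(struct demo_file *f, char *buf, long count);
        long (*write)(struct demo_file *f, const char *buf, long count);
    };

    struct demo_file {
        const char *name;
        const struct demo_file_operations *f_op;   /* filled in by the "fs type" */
    };

    /* One hypothetical fs type supplies its own implementation... */
    static long demofs_read(struct demo_file *f, char *buf, long count)
    {
        (void)buf; (void)count;
        printf("demofs: read from %s\n", f->name);
        return 0;
    }

    static const struct demo_file_operations demofs_fops = { demofs_read, NULL };

    /* ...and the fs-independent code only ever calls through the table. */
    static long vfs_style_read(struct demo_file *f, char *buf, long count)
    {
        if (f->f_op && f->f_op->read)
            return f->f_op->read(f, buf, count);
        return -1;                                 /* operation not supported */
    }

    int main(void)
    {
        struct demo_file f = { "somefile", &demofs_fops };
        char buf[16];
        vfs_style_read(&f, buf, sizeof(buf));
        return 0;
    }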

The kernel entry point to the actual file system is the structure file_system_type. An array of file_system_types is embodied within fs/filesystems.c and it is referenced whenever a mount is issued. The function read_super for the relevant fs type is then in charge of filling a struct super_block item, which in turn embeds a struct super_operations and a struct type_sb_info. The former provides pointers to generic fs operations for the current fs-type, the latter embeds specific information for the fs-type.

[NEW]The array of filesystem types has been turned into a linked list, to allow loading new fs types as kernel modules. The function (un-)register_filesystem is coded within fs/super.c.
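
A stripped-down illustration of the "linked list of filesystem types, searched at mount time" idea follows; register_filesystem and file_system_type are real kernel names, but everything in this sketch (the demo_ structures and functions) is invented for the example:

    #include <stdio.h>
    #include <string.h>

    struct demo_fs_type {                 /* simplified file_system_type */
        const char *name;
        int (*read_super)(const char *dev);
        struct demo_fs_type *next;
    };

    static struct demo_fs_type *fs_list;  /* head of the registered-type list */

    static void demo_register_filesystem(struct demo_fs_type *fs)
    {
        fs->next = fs_list;
        fs_list  = fs;
    }

    static int demo_mount(const char *type, const char *dev)
    {
        struct demo_fs_type *fs;
        for (fs = fs_list; fs; fs = fs->next)
            if (strcmp(fs->name, type) == 0)
                return fs->read_super(dev);   /* fill the superblock */
        return -1;                            /* unknown filesystem type */
    }

    static int minix_like_read_super(const char *dev)
    {
        printf("reading superblock from %s\n", dev);
        return 0;
    }

    static struct demo_fs_type minix_like = { "minix", minix_like_read_super, NULL };

    int main(void)
    {
        demo_register_filesystem(&minix_like);
        return demo_mount("minix", "/dev/fd0") == 0 ? 0 : 1;
    }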

The Anatomy of a Filesystem Type

The role of a filesystem type is to perform the low-level tasks used to map the relatively high level VFS operations on the physical media (disks, network or whatever). The VFS interface is flexible enough to allow support for both conventional Unix filesystems and exotic situations such as the msdos and umsdos types.

Each fs-type is made up of the following items, in addition to its own directory:

  • An entry in the file_systems[] array (fs/filesystems.c);
  • The superblock include file (include/linux/type_fs_sb.h);
  • The inode include file (include/linux/type_fs_i.h);
  • The generic own include file (include/linux/type_fs.h);
  • Two #include lines within include/linux/fs.h, as well as the entries in struct super_block and struct inode.

The own directory for the fs type contains all the real code, responsible for inode and data management.

[MORE]The chapter about procfs in this guide uncovers all the details about low-level code and VFS interface for that fs type. Source code in fs/procfs is quite understandable after reading the chapter.

We'll now look at the internal workings of the VFS mechanism, and the minix filesystem source is used as a working example. I chose the minix type because it is small but complete; moreover, any other fs type in Linux derives from the minix one. The ext2 type, the de-facto standard in recent Linux installations, is much more complex than that and its exploration is left as an exercise for the smart reader.

When a minix-fs is mounted, minix_read_super fills the super_block structure with data read from the mounted device. The s_op field of the structure will then hold a pointer to minix_sops, which is used by the generic filesystem code to dispatch superblock operations.

Chaining the newly mounted fs in the global system tree relies on the following data items (assuming sb is the super_block structure and dir_i points to the inode for the mount point):

  • sb->s_mounted points to the root-dir inode of the mounted filesystem (MINIX_ROOT_INO);
  • dir_i->i_mount holds sb->s_mounted;
  • sb->s_covered holds dir_i.

Umounting will eventually be performed by do_umount, which in turn invokes minix_put_super.

Whenever a file is accessed, minix_read_inode comes into play; it fills the system-wide inode structure with fields coming from minix_inode. The inode->i_op field is filled according to inode->i_mode and it is responsible for any further operation on the file. The source for the minix functions just described is to be found in fs/minix/inode.c.

The inode_operations structure is used to dispatch inode operations (you guessed it) to the fs-type specific kernel functions; the first entry in the structure is a pointer to a file_operations item, which is the data-management equivalent of i_op. The minix fs-type allows three instances of inode-operation sets (for directories, for files and for symbolic links) and two instances of file-operation sets (symlinks don't need one).

Directory operations (minix_readdir alone) are to be found in fs/minix/dir.c; file operations (read and write) appear within fs/minix/file.c and symlink operations (reading and following the link) in fs/minix/symlink.c.

The rest of the minix directory implements the following tasks:

  • bitmap.c manages allocation and freeing of inodes and blocks (the ext2 fs, by contrast, uses two different source files);
  • fsync.c is responsible for the fsync() system call -- it manages direct, indirect and double indirect blocks (I assume you know about them; it's common Unix knowledge);
  • namei.c embeds all the name-related inode operations, such as creating and destroying nodes, renaming and linking;
  • truncate.c performs truncation of files.

The console driver

Being the main I/O device on most Linux boxes, the console driver deserves some attention. The source code related to the console, as well as the other character drivers, is to be found in drivers/char, and we'll use this very directory as our reference point when naming files.

Console initialization is performed by the function tty_init(), in tty_io.c. This function is only concerned with getting major device numbers and calling the init function for each device set. con_init(), then, is the one related to the console, and it resides in console.c.

[NEW]Initialization of the console has changed quite a lot during 1.1 evolution. console_init() has been detached from tty_init(), and is called directly by ../../main.c. The virtual consoles are now dynamically allocated, and quite a good deal of code has changed. So, I'll skip the details of initialization, allocation and such.

How file operations are dispatched to the console

This paragraph is quite low-level, and can be happily skipped over.

Needless to say, a Unix device is accessed through the filesystem. This paragraph details all the steps from the device file to the actual console functions. Moreover, the following information is extracted from the 1.1.73 source code, and it may be slightly different from the 1.0 source.

When a device inode is opened, the function chrdev_open() (or blkdev_open(), but we'll stick to character devices) in ../../fs/devices.c gets executed. This function is reached by means of the structure def_chr_fops, which in turn is referenced by chrdev_inode_operations, used by all the filesystem types (see the previous section about filesystems).

chrdev_open takes care of specifying the device operations by substituting the device-specific file_operations table in the current filp, and calls the specific open(). Device-specific tables are kept in the array chrdevs[], indexed by the major device number, and filled by the same ../../fs/devices.c.
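
In outline, the character-device switch is an array of operation tables indexed by the major number. The sketch below reproduces the shape of that lookup with invented demo_ names; the real chrdev_open() additionally installs the table in the open file structure before calling the device's own open():

    #include <stdio.h>

    #define DEMO_MAX_CHRDEV 32

    struct demo_file_operations {
        int (*open)(int minor);
    };

    /* chrdevs[]-style table, indexed by major number. */
    static const struct demo_file_operations *demo_chrdevs[DEMO_MAX_CHRDEV];

    static int tty_like_open(int minor)
    {
        printf("opening tty-like device, minor %d\n", minor);
        return 0;
    }

    static const struct demo_file_operations tty_like_fops = { tty_like_open };

    static int demo_chrdev_open(int major, int minor)
    {
        if (major < 0 || major >= DEMO_MAX_CHRDEV || !demo_chrdevs[major])
            return -1;                           /* no such device */
        /* A real chrdev_open() would also install this table in the filp. */
        return demo_chrdevs[major]->open(minor);
    }

    int main(void)
    {
        demo_chrdevs[4] = &tty_like_fops;        /* pretend major 4 is the tty driver */
        return demo_chrdev_open(4, 1);
    }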

If the device is a tty one (aren't we aiming at the console?), we come to the tty drivers, whose functions are in tty_io.c, indexed by tty_fops. Thus, tty_open() calls init_dev(), which allocates any data structure needed by the device, based on the minor device number.

The minor number is also used to retrieve the actual driver for the device, which has been registered through tty_register_driver(). The driver, then, is still another structure used to dispatch computation, just like file_ops; it is concerned with writing and controlling the device. The last data structure used in managing a tty is the line discipline, described later. The line discipline for the console (and any other tty device) is set by initialize_tty_struct(), invoked by init_dev.

Everything we touched in this paragraph is device-independent. The only console-specific detail is that console.c has registered its own driver during con_init(). The line discipline, on the contrary, is independent of the device.

[MORE]The tty_driver structure is fully explained within <linux/tty_driver.h>.

[NEW]The above information has been extracted from the 1.1.73 source code. It isn't unlikely for your kernel to be somewhat different ("This information is subject to change without notice").

Writing to the console

When a console device is written to, the function con_write gets invoked. This function manages all the control characters and escape sequences used to provide applications with complete screen management. The escape sequences implemented are those of the vt102 terminal; this means that your environment should say TERM=vt102 when you are telnetting to a non-Linux host; the best choice for local activities, however, is TERM=console, because the Linux console offers a superset of vt102 functionality.

con_write(), thus, is mostly made up of nested switch statements, used to handle a finite state automaton interpreting escape sequences one character at a time. When in normal mode, the character being printed is written directly to the video memory, using the current attr-ibute. Within console.c, all the fields of struct vc are made accessible through macros, so any reference to (for example) attr, does actually refer to the field in the structure vc_cons[currcons], as long as currcons is the number of the console being referred to.
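
The "nested switch statements driving a finite state machine" approach can be illustrated with a toy interpreter that only recognizes ESC [ ... sequences and prints everything else; it is merely a shape-of-the-code sketch, far simpler than the real con_write():

    #include <stdio.h>

    enum state { NORMAL, SAW_ESC, IN_CSI };

    /* Toy vt102-style interpreter: prints normal characters and reports
     * "ESC [ ... <letter>" control sequences instead of acting on them. */
    static void toy_console_write(const char *s)
    {
        enum state st = NORMAL;

        for (; *s; s++) {
            switch (st) {
            case NORMAL:
                if (*s == 27)                 /* ESC */
                    st = SAW_ESC;
                else
                    putchar(*s);              /* would go to video memory */
                break;
            case SAW_ESC:
                st = (*s == '[') ? IN_CSI : NORMAL;
                break;
            case IN_CSI:
                if ((*s >= 'a' && *s <= 'z') || (*s >= 'A' && *s <= 'Z')) {
                    printf("<csi:%c>", *s);   /* final byte ends the sequence */
                    st = NORMAL;
                }                             /* digits and ';' are parameters */
                break;
            }
        }
    }

    int main(void)
    {
        toy_console_write("plain \033[1;31m bold red \033[0m done\n");
        return 0;
    }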

[NEW]Actually, vc_cons in newer kernels is no longer an array of structures; it now is an array of pointers whose contents are kmalloc()ed. The use of macros greatly simplified changing the approach, because much of the code didn't need to be rewritten.

Actual mapping and unmapping of the console memory to screen is performed by the functions set_scrmem() (which copies data from the console buffer to video memory) and get_scrmem (which copies back data to the console buffer). The private buffer of the current console is physically mapped on the actual video RAM, in order to minimize the number of data transfers. This means that get- and set-_scrmem() are static to console.c and are called only during a console switch.

Reading the console

Reading the console is accomplished through the line discipline. The default (and unique) line discipline in Linux is called tty_ldisc_N_TTY. The line discipline is what "disciplines" input through a line. It is another function table (we're used to the approach, aren't we?), which is concerned with reading the device. With the help of termios flags, the line discipline is what controls input from the tty: raw, cbreak and cooked mode; select(); ioctl() and so on.
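
From user space, the line-discipline behaviour is exactly what a program tweaks through termios. The short, standard POSIX example below (not kernel code) switches the terminal from cooked to a raw-ish mode and back:

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios saved, raw;

        if (tcgetattr(STDIN_FILENO, &saved) < 0) {
            perror("tcgetattr");
            return 1;
        }
        raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);   /* no line buffering, no echo */
        raw.c_cc[VMIN]  = 1;               /* read returns after one byte */
        raw.c_cc[VTIME] = 0;

        tcsetattr(STDIN_FILENO, TCSANOW, &raw);
        printf("raw-ish mode: press one key... ");
        fflush(stdout);
        int c = getchar();                 /* returns immediately, unechoed */
        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
        printf("\ngot %d; cooked mode restored\n", c);
        return 0;
    }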

The read function in the line discipline is called read_chan(), which reads the tty buffer regardless of where its contents came from. The reason is that character arrival through a tty is managed by asynchronous hardware interrupts.

[MORE]The line discipline N_TTY is to be found in the same tty_io.c, though later kernels use a different n_tty.c source file.

The lowest level of console input is part of keyboard management, and thus it is handled within keyboard.c, in the function keyboard_interrupt().

Keyboard management

Keyboard management is quite a nightmare. It is confined to the file keyboard.c, which is full of hexadecimal numbers to represent the various keycodes appearing in keyboards of different manufacturers.

I won't dig into keyboard.c, because it holds no information relevant to the kernel hacker.

[MORE]For those readers who are really interested in the Linux keyboard, the best approach to keyboard.c is from the last line upward. Lowest level details occur mainly in the first half of the file.

Switching the current console

The current console is switched through invocation of the function change_console(), which resides in tty_io.c and is invoked by both keyboard.c and vt.c (the former switches console in response to keypresses, the latter when a program requests it by invoking an ioctl() call).

The actual switching process is performed in two steps, and the function complete_change_console() takes care of the second part of it. Splitting the switch is meant to complete the task after a possible handshake with the process controlling the tty we're leaving. If the console is not under process control, change_console() calls complete_change_console() by itself. Process intervention is needed to successfully switch from a graphic console to a text one and vice versa, and the X server (for example) is the controlling process of its own graphic console.

The selection mechanism

"Selection" is the cut and paste facility for the Linux text consoles. The mechanism is mainly handled by a user-level process, which can be instantiated by either selection or gpm. The user-level program uses ioctl() on the console to tell the kernel to highlight a region of the screen. The selected text, then, is copied to a selection buffer. The buffer is a static entity in console.c. Pasting text is accomplished by `manually' pushing characters in the tty input queue. The whole selection mechanism is protected by #ifdef so users can disable it during kernel configuration to save a few kilobytes of RAM.

Selection is a very-low-level facility, and its workings are hidden from any other kernel activity. This means that most of the #ifdef's simply deal with removing the highlight before the screen is modified in any way.

[NEW]Newer kernels feature improved code for selection, and the mouse pointer can be highlighted independently of the selected text (1.1.32 and later). Moreover, from 1.1.73 onward a dynamic buffer is used for selected text rather than a static one, making the kernel 4kB smaller.

ioctl()ling the device

The ioctl() system call is the entry point for user processes to control the behaviour of device files. Ioctl management is spawned by ../../fs/ioctl.c, where the real sys_ioctl() resides. The standard ioctl requests are performed right there, other file-related requests are processed by file_ioctl() (same source file), while any other request is dispatched to the device-specific ioctl() function.
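
From the user side, the whole dispatch chain hides behind a single ioctl() on the device file. A familiar example is asking the tty driver for the current screen size:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        struct winsize ws;

        /* The request is dispatched by the kernel to the tty/console driver. */
        if (ioctl(STDIN_FILENO, TIOCGWINSZ, &ws) < 0) {
            perror("ioctl(TIOCGWINSZ)");
            return 1;
        }
        printf("console is %d rows x %d columns\n", ws.ws_row, ws.ws_col);
        return 0;
    }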

The ioctl material for console devices resides in vt.c, because the console driver dispatches ioctl requests to vt_ioctl().

[NEW]The information above refers to 1.1.7x. The 1.0 kernel doesn't have the "driver" table, and vt_ioctl() is pointed to directly by the file_operations() table.

Ioctl material is quite confusing, indeed. Some requests are related to the device, and some are related to the line discipline. I'll try to summarize things for the 1.0 and the 1.1.7x kernels; things changed somewhere in between.

The 1.1.7x series features the following approach: tty_ioctl.c implements only line discipline requests (namely n_tty_ioctl(), which is the only n_tty function outside of n_tty.c), while the file_operations field points to tty_ioctl() in tty_io.c. If the request number is not resolved by tty_ioctl(), it is passed along to tty->driver.ioctl or, if that fails, to tty->ldisc.ioctl. Driver-related material for the console is to be found in vt.c, while line discipline material is in tty_ioctl.c.

In the 1.0 kernel, tty_ioctl() is in tty_ioctl.c and is pointed to by generic tty file_operations. Unresolved requests are passed along to the specific ioctl function or to the line-discipline code, in a way similar to 1.1.7x.

Note that in both cases, the TIOCLINUX request is in the device-independent code. This implies that the console selection can be set by ioctlling any tty (set_selection() always operates on the foreground console), and this is a security hole. It is also a good reason to switch to a newer kernel, where the problem is fixed by only allowing the superuser to handle the selection.

A variety of requests can be issued to the console device, and the best way to know about them is to browse the source file vt.c.
