diff --git a/blog/content/edition-3/posts/02-booting/index.md b/blog/content/edition-3/posts/02-booting/index.md
index 424588f8..c59574ac 100644
--- a/blog/content/edition-3/posts/02-booting/index.md
+++ b/blog/content/edition-3/posts/02-booting/index.md
@@ -1178,5 +1178,6 @@
 We used the `bootloader` and `bootloader_api` crates to convert our kernel to a
 Through advanced cargo features such as [workspaces](#creating-a-workspace), [build scripts](#using-the-diskimagebuilder), and [artifact dependencies](#adding-an-artifact-dependency), we created a nice build system that can bring us directly from source code to a running QEMU instance using a single command.
 We also started to look into frame buffers and [screen output](#screen-output).
-In the next post, we will continue with this and learn how to draw shapes and render text.
+In the [next post], we will continue with this and learn how to draw shapes and render text.
 
+[next post]: @/edition-3/posts/03-screen-output/index.md
diff --git a/blog/content/edition-3/posts/03-screen-output/index.md b/blog/content/edition-3/posts/03-screen-output/index.md
index 9c970ba1..db38a1b2 100644
--- a/blog/content/edition-3/posts/03-screen-output/index.md
+++ b/blog/content/edition-3/posts/03-screen-output/index.md
@@ -12,7 +12,67 @@ icon = '''
+
+This blog is openly developed on [GitHub].
+If you have any problems or questions, please open an issue there.
+You can also leave comments [at the bottom].
+The complete source code for this post can be found in the [`post-3.3`][post branch] branch.
+
+[GitHub]: https://github.com/phil-opp/blog_os
+[at the bottom]: #comments
+
+[post branch]: https://github.com/phil-opp/blog_os/tree/post-3.3
+
+
+
+## Recap
+
+In the [previous post], we learned how to make our minimal kernel bootable.
+Using the [`BootInfo`] provided by the bootloader, we were able to access a special memory region called the _[framebuffer]_, which controls the screen output.
+We wrote some example code to display a gray stripe pattern:
+
+[previous post]: @/edition-3/posts/02-booting/index.md
+[`BootInfo`]: https://docs.rs/bootloader_api/latest/bootloader_api/info/struct.BootInfo.html
+
+```rust
+// in src/kernel/main.rs
+
+fn kernel_main(boot_info: &'static mut BootInfo) -> ! {
+    if let Some(framebuffer) = boot_info.framebuffer.as_mut() {
+        let mut value = 0x90;
+        for byte in framebuffer.buffer_mut() {
+            *byte = value;
+            value = value.wrapping_add(7);
+        }
+    }
+    loop {}
+}
+```
+
+The reason that the above code affects the screen output is that the graphics card interprets the framebuffer memory as a [bitmap] image.
+A bitmap describes an image through a predefined number of bits per pixel.
+TODO
+
+[bitmap]: https://en.wikipedia.org/wiki/Bitmap
+
+
+---
+
+draw shapes and pixels directly onto the framebuffer.
+That works, but how do we get from individual pixels to displaying text on the screen?
+Answering this question requires a closer look at how characters are rendered behind the scenes.
+When a key is pressed, the keyboard sends a key code to the CPU.
+It is then the job of the software running on the CPU to interpret that code and match it with a small image (a _glyph_) to draw.
+The glyph is then sent to either the GPU or the framebuffer (the latter in our case) to be drawn on the screen, and the user sees it as a letter, number, CJK character, emoji, or whatever else they wanted to display by pressing that key.
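As a rough illustration of this glyph-rendering pipeline, the following sketch draws a single character into a simulated framebuffer. Note that this is not the code we will develop in this post: the 8×8 bitmap for `'A'`, the 3-bytes-per-pixel layout, and the `draw_glyph` helper are all assumptions made for demonstration purposes.

```rust
/// Hypothetical 8x8 monochrome bitmap for the character 'A':
/// each byte is one row, each set bit is one lit pixel.
const GLYPH_A: [u8; 8] = [
    0b00111000,
    0b01101100,
    0b11000110,
    0b11000110,
    0b11111110,
    0b11000110,
    0b11000110,
    0b00000000,
];

/// Copies a glyph into a byte-based framebuffer at pixel position (x, y).
/// `stride` is the number of pixels per framebuffer row and
/// `bytes_per_pixel` the size of a single pixel in bytes.
fn draw_glyph(
    framebuffer: &mut [u8],
    glyph: &[u8; 8],
    x: usize,
    y: usize,
    stride: usize,
    bytes_per_pixel: usize,
) {
    for (row, bits) in glyph.iter().enumerate() {
        for col in 0..8 {
            if bits & (0b1000_0000 >> col) != 0 {
                // Translate the 2D pixel position into a byte offset.
                let pixel = (y + row) * stride + (x + col);
                let offset = pixel * bytes_per_pixel;
                // "White" pixel: set every byte of the pixel to 0xff.
                framebuffer[offset..offset + bytes_per_pixel].fill(0xff);
            }
        }
    }
}

fn main() {
    // Simulated 16x16 framebuffer with 3 bytes per pixel (e.g. RGB).
    let stride = 16;
    let bytes_per_pixel = 3;
    let mut framebuffer = vec![0u8; 16 * stride * bytes_per_pixel];

    draw_glyph(&mut framebuffer, &GLYPH_A, 4, 4, stride, bytes_per_pixel);

    // The top row of the glyph (0b00111000) lights pixels 6..=8 of row 4.
    let pixel = |x: usize, y: usize| framebuffer[(y * stride + x) * bytes_per_pixel];
    assert_eq!(pixel(6, 4), 0xff);
    assert_eq!(pixel(5, 4), 0x00);
}
```

A real kernel does the same thing for every character of a string, advancing `x` by the glyph width after each character and wrapping to the next line when the row is full.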