diff --git a/blog/content/second-edition/posts/10-heap-allocation/allocation-fragmentation.svg b/blog/content/second-edition/posts/10-heap-allocation/allocation-fragmentation.svg
new file mode 100644
index 00000000..b9c74784
--- /dev/null
+++ b/blog/content/second-edition/posts/10-heap-allocation/allocation-fragmentation.svg
@@ -0,0 +1,2 @@
+
+
\ No newline at end of file
diff --git a/blog/content/second-edition/posts/10-heap-allocation/index.md b/blog/content/second-edition/posts/10-heap-allocation/index.md
index 3d8c815a..82f9d672 100644
--- a/blog/content/second-edition/posts/10-heap-allocation/index.md
+++ b/blog/content/second-edition/posts/10-heap-allocation/index.md
@@ -725,527 +725,22 @@ While a bump allocator is seldom used as the global allocator, the principle of
[arena allocation]: https://mgravell.github.io/Pipelines.Sockets.Unofficial/docs/arenas.html
[`toolshed`]: https://docs.rs/toolshed/0.8.1/toolshed/index.html
-TODO explanation
+#### Reusing Freed Memory?
-TODO &self problem
+The main limitation of a bump allocator is that it never reuses deallocated memory. The question is: Can we extend our bump allocator somehow to remove this limitation?
-In this form the allocator is not useful since the deallocated memory is never freed. This isn't something that we can add easily, because memory regions might be deallocated in a different order than they were allocated. So a single pointer does not suffice to hold this information.
+As we learned at the beginning of this post, allocations can live arbitrarily long and can be freed in an arbitrary order. This means that we need to keep track of a potentially unbounded number of non-contiguous, unused memory regions, as illustrated by the following example:
-However, there is one possible way to free memory with a bump allocator: We can reset the whole heap after the last memory region is deallocated. To implement this, we add an additional counter field:
+![Heap memory over time: allocations and deallocations leave a growing number of non-contiguous unused memory regions](allocation-fragmentation.svg)
-TODO
+The graphic shows the heap over the course of time. At the beginning, the complete heap is unused and the `next` address is equal to `heap_start` (line 1). Then the first allocation occurs (line 2). In line 3, a second memory block is allocated and the first allocation is freed. In line 4, many more allocations are added. Half of them are very short-lived and are already freed again in line 5, which also adds another new allocation.
-### Bitmap Allocator
+Line 5 shows the fundamental problem: In total, we have five unused memory regions of different sizes, but the `next` pointer can only point to the beginning of the last region. While we could store the start addresses and sizes of the other unused memory regions in an array of size 4 for this example, this isn't a general solution since we could easily create an example with 8, 16, or 1000 unused memory regions.
+
+Normally, when we have a potentially unbounded number of items, we can just use a heap-allocated collection. This isn't really possible in our case, since the heap allocator can't depend on itself: Allocating memory for the bookkeeping would call back into the allocator, which would cause endless recursion or deadlocks. So we need to find a different solution.
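+
+To make this more concrete, consider the following minimal sketch of a free list backed by a heap collection. The `NaiveListAllocator` type and its field names are hypothetical and not part of this post's code; the point is only that the `Vec::push` in `dealloc` may itself allocate:
+
+```rust
+// hypothetical sketch: tracking freed regions in a heap-allocated `Vec`
+use alloc::vec::Vec; // the `Vec` itself needs heap memory
+
+struct NaiveListAllocator {
+    next: usize,
+    heap_end: usize,
+    // start address and size of every freed memory region
+    free_regions: Vec<(usize, usize)>,
+}
+
+impl NaiveListAllocator {
+    fn dealloc(&mut self, region_start: usize, region_size: usize) {
+        // Growing the `Vec` may require a heap allocation, which calls back
+        // into the very allocator we are in the middle of modifying. The
+        // result is unbounded recursion or, with a locking wrapper, a
+        // deadlock on the allocator's lock.
+        self.free_regions.push((region_start, region_size));
+    }
+}
+```
+
+The same applies to every growable collection from the `alloc` crate, because growing it goes through the global allocator we are trying to implement. The bookkeeping information therefore has to live somewhere that doesn't require additional allocations.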
### LinkedList Allocator
-### Bucket Allocator
-
-## Summary
-
-## What's next?
-
-
-
-
----------------
----------------
----------------
-
-# old
-
-A good allocator is fast and reliable. It also effectively utilizes the available memory and keeps [fragmentation] low. Furthermore, it works well for concurrent applications and scales to any number of processors. It even optimizes the memory layout with respect to the CPU caches to improve [cache locality] and avoid [false sharing].
-
-[cache locality]: http://docs.cray.com/books/S-2315-50/html-S-2315-50/qmeblljm.html
-[fragmentation]: https://en.wikipedia.org/wiki/Fragmentation_(computing)
-[false sharing]: http://mechanical-sympathy.blogspot.de/2011/07/false-sharing.html
-
-These requirements make good allocators pretty complex. For example, [jemalloc] has over 30.000 lines of code. This complexity is out of scope for our kernel, so we will create a much simpler allocator. Nevertheless, it should suffice for the foreseeable future, since we'll allocate only when it's absolutely necessary.
-
-## The Allocator Interface
-
-The allocator interface in Rust is defined through the [`Alloc` trait], which looks like this:
-
-[`Alloc` trait]: https://doc.rust-lang.org/nightly/alloc/allocator/trait.Alloc.html
-
-```rust
-pub unsafe trait Alloc {
- unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
- unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
- […] // about 13 methods with default implementations
-}
-```
-
-The `alloc` method should allocate a memory block with the size and alignment given through `Layout` parameter. The `deallocate` method should free such memory blocks again. Both methods are `unsafe`, as is the trait itself. This has different reasons:
-
-- Implementing the `Alloc` trait is unsafe, because the implementation must satisfy a set of contracts. Among other things, pointers returned by `alloc` must point to valid memory and adhere to the `Layout` requirements.
-- Calling `alloc` is unsafe because the caller must ensure that the passed layout does not have size zero. I think this is because of compatibility reasons with existing C-allocators, where zero-sized allocations are undefined behavior.
-- Calling `dealloc` is unsafe because the caller must guarantee that the passed parameters adhere to the contract. For example, `ptr` must denote a valid memory block allocated via this allocator.
-
-To set the system allocator, the `global_allocator` attribute can be added to a `static` that implements `Alloc` for a shared reference of itself. For example:
-
-```rust
-#[global_allocator]
-static MY_ALLOCATOR: MyAllocator = MyAllocator {...};
-
-impl<'a> Alloc for &'a MyAllocator {
- unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {...}
- unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {...}
-}
-```
-
-Note that `Alloc` needs to be implemented for `&MyAllocator`, not for `MyAllocator`. The reason is that the `alloc` and `dealloc` methods require mutable `self` references, but there's no way to get such a reference safely from a `static`. By requiring implementations for `&MyAllocator`, the global allocator interface avoids this problem and pushes the burden of synchronization onto the user.
-
-## Including the alloc crate
-The `Alloc` trait is part of the `alloc` crate, which like `core` is a subset of Rust's standard library. Apart from the trait, the crate also contains the standard types that require allocations such as `Box`, `Vec` and `Arc`. We can include it through a simple `extern crate`:
-
-```rust
-// in src/lib.rs
-#![feature(alloc)] // the alloc crate is still unstable
-
-[...]
-
-#[macro_use]
-extern crate alloc;
-```
-
-We don't need to add anything to our Cargo.toml, since the `alloc` crate is part of the standard library and shipped with the Rust compiler. The `alloc` crate provides the [format!] and [vec!] macros, so we use `#[macro_use]` to import them.
-
-[format!]: //doc.rust-lang.org/nightly/collections/macro.format!.html
-[vec!]: https://doc.rust-lang.org/nightly/collections/macro.vec!.html
-
-When we try to compile our crate now, the following error occurs:
-
-```
-error[E0463]: can't find crate for `alloc`
- --> src/lib.rs:10:1
- |
-16 | extern crate alloc;
- | ^^^^^^^^^^^^^^^^^^^ can't find crate
-```
-
-The problem is that [`xargo`] only cross compiles `libcore` by default. To also cross compile the `alloc` crate, we need to create a file named `Xargo.toml` in our project root (right next to the `Cargo.toml`) with the following content:
-
-[`xargo`]: https://github.com/japaric/xargo
-
-```toml
-[target.x86_64-blog_os.dependencies]
-alloc = {}
-```
-
-This instructs `xargo` that we also need `alloc`. It still doesn't compile, since we need to define a global allocator in order to use the `alloc` crate:
-
-```
-error: no #[default_lib_allocator] found but one is required; is libstd not linked?
-```
-
-## A Bump Allocator
-
-For our first allocator, we start simple. We create a `memory::heap_allocator` module containing a so-called _bump allocator_:
-
-```rust
-// in src/memory/mod.rs
-
-mod heap_allocator;
-
-// in src/memory/heap_allocator.rs
-
-use alloc::heap::{Alloc, AllocErr, Layout};
-
-/// A simple allocator that allocates memory linearly and ignores freed memory.
-#[derive(Debug)]
-pub struct BumpAllocator {
- heap_start: usize,
- heap_end: usize,
- next: usize,
-}
-
-impl BumpAllocator {
- pub const fn new(heap_start: usize, heap_end: usize) -> Self {
- Self { heap_start, heap_end, next: heap_start }
- }
-}
-
-unsafe impl Alloc for BumpAllocator {
- unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
- let alloc_start = align_up(self.next, layout.align());
- let alloc_end = alloc_start.saturating_add(layout.size());
-
- if alloc_end <= self.heap_end {
- self.next = alloc_end;
- Ok(alloc_start as *mut u8)
- } else {
- Err(AllocErr::Exhausted{ request: layout })
- }
- }
-
- unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
- // do nothing, leak memory
- }
-}
-```
-
-We also need to add `#![feature(allocator_api)]` to our `lib.rs`, since the allocator API is still unstable.
-
-The `heap_start` and `heap_end` fields contain the start and end address of our kernel heap. The `next` field contains the next free address and is increased after every allocation. To `allocate` a memory block we align the `next` address using the `align_up` function (described below). Then we add up the desired `size` and make sure that we don't exceed the end of the heap. We use a saturating add so that the `alloc_end` cannot overflow, which could lead to an invalid allocation. If everything goes well, we update the `next` address and return a pointer to the start address of the allocation. Else, we return `None`.
-
-### Alignment
-In order to simplify alignment, we add `align_down` and `align_up` functions:
-
-``` rust
-/// Align downwards. Returns the greatest x with alignment `align`
-/// so that x <= addr. The alignment must be a power of 2.
-pub fn align_down(addr: usize, align: usize) -> usize {
- if align.is_power_of_two() {
- addr & !(align - 1)
- } else if align == 0 {
- addr
- } else {
- panic!("`align` must be a power of 2");
- }
-}
-
-/// Align upwards. Returns the smallest x with alignment `align`
-/// so that x >= addr. The alignment must be a power of 2.
-pub fn align_up(addr: usize, align: usize) -> usize {
- align_down(addr + align - 1, align)
-}
-```
-
-Let's start with `align_down`: If the alignment is a valid power of two (i.e. in `{1,2,4,8,…}`), we use some bitwise operations to return the aligned address. It works because every power of two has exactly one bit set in its binary representation. For example, the numbers `{1,2,4,8,…}` are `{1,10,100,1000,…}` in binary. By subtracting 1 we get `{0,01,011,0111,…}`. These binary numbers have a `1` at exactly the positions that need to be zeroed in `addr`. For example, the last 3 bits need to be zeroed for a alignment of 8.
-
-To align `addr`, we create a [bitmask] from `align-1`. We want a `0` at the position of each `1`, so we invert it using `!`. After that, the binary numbers look like this: `{…11111,…11110,…11100,…11000,…}`. Finally, we zero the correct bits using a binary `AND`.
-
-[bitmask]: https://en.wikipedia.org/wiki/Mask_(computing)
-
-Aligning upwards is simple now. We just increase `addr` by `align-1` and call `align_down`. We add `align-1` instead of `align` because we would otherwise waste `align` bytes for already aligned addresses.
-
-### Reusing Freed Memory
-The heap memory is limited, so we should reuse freed memory for new allocations. This sounds simple, but is not so easy in practice since allocations can live arbitrarily long (and can be freed in an arbitrary order). This means that we need some kind of data structure to keep track of which memory areas are free and which are in use. This data structure should be very optimized since it causes overheads in both space (i.e. it needs backing memory) and time (i.e. accessing and organizing it needs CPU cycles).
-
-Our bump allocator only keeps track of the next free memory address, which doesn't suffice to keep track of freed memory areas. So our only choice is to ignore deallocations and leak the corresponding memory. Thus our allocator quickly runs out of memory in a real system, but it suffices for simple testing. Later in this post, we will introduce a better allocator that does not leak freed memory.
-
-### Using it as System Allocator
-
-Above we saw that we can use a static allocator as system allocator through the `global_allocator` attribute:
-
-```rust
-#[global_allocator]
-static ALLOCATOR: MyAllocator = MyAllocator {...};
-```
-
-This requires an implementation of `Alloc` for `&MyAllocator`, i.e. a shared reference. If we try to add such an implementation for our bump allocator (`unsafe impl<'a> Alloc for &'a BumpAllocator`), we have a problem: Our `alloc` method requires updating the `next` field, which is not possible for a shared reference.
-
-One solution could be to put the bump allocator behind a Mutex and wrap it into a new type, for which we can implement `Alloc` for a shared reference:
-
-```rust
-struct LockedBumpAllocator(Mutex);
-
-impl<'a> Alloc for &'a LockedBumpAllocator {
- unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
- self.0.lock().alloc(layout)
- }
-
- unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
- self.0.lock().dealloc(ptr, layout)
- }
-}
-```
-
-However, there is a more interesting solution for our bump allocator that avoids locking altogether. The idea is to exploit that we only need to update a single `usize` field byusing an `AtomicUsize` type. This type uses special synchronized hardware instructions to ensure data race freedom without requiring locks.
-
-#### A lock-free Bump Allocator
-A lock-free implementation looks like this:
-
-```rust
-use core::sync::atomic::{AtomicUsize, Ordering};
-
-/// A simple allocator that allocates memory linearly and ignores freed memory.
-#[derive(Debug)]
-pub struct BumpAllocator {
- heap_start: usize,
- heap_end: usize,
- next: AtomicUsize,
-}
-
-impl BumpAllocator {
- pub const fn new(heap_start: usize, heap_end: usize) -> Self {
- // NOTE: requires adding #![feature(const_atomic_usize_new)] to lib.rs
- Self { heap_start, heap_end, next: AtomicUsize::new(heap_start) }
- }
-}
-
-unsafe impl<'a> Alloc for &'a BumpAllocator {
- unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
- loop {
- // load current state of the `next` field
- let current_next = self.next.load(Ordering::Relaxed);
- let alloc_start = align_up(current_next, layout.align());
- let alloc_end = alloc_start.saturating_add(layout.size());
-
- if alloc_end <= self.heap_end {
- // update the `next` pointer if it still has the value `current_next`
- let next_now = self.next.compare_and_swap(current_next, alloc_end,
- Ordering::Relaxed);
- if next_now == current_next {
- // next address was successfully updated, allocation succeeded
- return Ok(alloc_start as *mut u8);
- }
- } else {
- return Err(AllocErr::Exhausted{ request: layout })
- }
- }
- }
-
- unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
- // do nothing, leak memory
- }
-}
-```
-
-The implementation is a bit more complicated now. First, there is now a `loop` around the whole method body, since we might need multiple tries until we succeed (e.g. if multiple threads try to allocate at the same time). Also, the loads operation is an explicit method call now, i.e. `self.next.load(Ordering::Relaxed)` instead of just `self.next`. The ordering parameter makes it possible to restrict the automatic instruction reordering performed by both the compiler and the CPU itself. For example, it is used when implementing locks to ensure that no write to the locked variable happens before the lock is acquired. We don't have such requirements, so we use the less restrictive `Relaxed` ordering.
-
-The heart of this lock-free method is the `compare_and_swap` call that updates the `next` address:
-
-```rust
-...
-let next_now = self.next.compare_and_swap(current_next, alloc_end,
- Ordering::Relaxed);
-if next_now == current_next {
- // next address was successfully updated, allocation succeeded
- return Ok(alloc_start as *mut u8);
-}
-...
-```
-
-Compare-and-swap is a special CPU instruction that updates a variable with a given value if it still contains the value we expect. If it doesn't, it means that another thread updated the value simultaneously, so we need to try again. The important feature is that this happens in a single uninteruptible operation (thus the name `atomic`), so no partial updates or intermediate states are possible.
-
-In detail, `compare_and_swap` works by comparing `next` with the first argument and, in case they're equal, updates `next` with the second parameter (the previous value is returned). To find out whether a switch happened, we check the returned previous value of `next`. If it is equal to the first parameter, the values were swapped. Otherwise, we try again in the next loop iteration.
-
-#### Setting the Global Allocator
-
-Now we can define a static bump allocator, that we can set as system allocator:
-
-```rust
-pub const HEAP_START: usize = 0o_000_001_000_000_0000;
-pub const HEAP_SIZE: usize = 100 * 1024; // 100 KiB
-
-#[global_allocator]
-static HEAP_ALLOCATOR: BumpAllocator = BumpAllocator::new(HEAP_START,
- HEAP_START + HEAP_SIZE);
-```
-
-We use `0o_000_001_000_000_0000` as heap start address, which is the address starting at the second `P3` entry. It doesn't really matter which address we choose here as long as it's unused. We use a heap size of 100 KiB, which should be large enough for the near future.
-
-Putting the above in the `memory::heap_allocator` module would make most sense, but unfortunately there is currently a [weird bug][global allocator bug] in the global allocator implementation that requires putting the global allocator in the root module. I hope it's fixed soon, but until then we need to put the above lines in `src/lib.rs`. For that, we need to make the `memory::heap_allocator` module public and add an import for `BumpAllocator`. We also need to add the `#![feature(global_allocator)]` at the top of our `lib.rs`, since the `global_allocator` attribute is still unstable.
-
-[global allocator bug]: https://github.com/rust-lang/rust/issues/44113
-
-That's it! We have successfully created and linked a custom system allocator. Now we're ready to test it.
-
-### Testing
-
-We should be able to allocate memory on the heap now. Let's try it in our `rust_main`:
-
-```rust
-// in rust_main in src/lib.rs
-
-use alloc::boxed::Box;
-let heap_test = Box::new(42);
-```
-
-When we run it, a triple fault occurs and causes permanent rebooting. Let's try debug it using QEMU and objdump as described [in the previous post][qemu debugging]:
-
-[qemu debugging]: http://os.phil-opp.com/remap-the-kernel.html#debugging
-
-```
-> qemu-system-x86_64 -d int -no-reboot -cdrom build/os-x86_64.iso
-…
-check_exception old: 0xffffffff new 0xe
- 0: v=0e e=0002 i=0 cpl=0 IP=0008:0000000000102860 pc=0000000000102860
- SP=0010:0000000000116af0 CR2=0000000040000000
-…
-```
-Aha! It's a [page fault] \(`v=0e`) and was caused by the code at `0x102860`. The code tried to write (`e=0002`) to address `0x40000000`. This address is `0o_000_001_000_000_0000` in octal, which is the `HEAP_START` address defined above. Of course it page-faults: We have forgotten to map the heap memory to some physical memory.
-
-[page fault]: http://wiki.osdev.org/Exceptions#Page_Fault
-
-### Some Refactoring
-In order to map the heap cleanly, we do a bit of refactoring first. We move all memory initialization from our `rust_main` to a new `memory::init` function. Now our `rust_main` looks like this:
-
-```rust
-// in src/lib.rs
-
-#[no_mangle]
-pub extern "C" fn rust_main(multiboot_information_address: usize) {
- // ATTENTION: we have a very small stack and no guard page
- vga_buffer::clear_screen();
- println!("Hello World{}", "!");
-
- let boot_info = unsafe {
- multiboot2::load(multiboot_information_address)
- };
- enable_nxe_bit();
- enable_write_protect_bit();
-
- // set up guard page and map the heap pages
- memory::init(boot_info);
-
- use alloc::boxed::Box;
- let heap_test = Box::new(42);
-
- println!("It did not crash!");
-
- loop {}
-}
-```
-
-The `memory::init` function looks like this:
-
-```rust
-// in src/memory/mod.rs
-
-use multiboot2::BootInformation;
-
-pub fn init(boot_info: &BootInformation) {
- let memory_map_tag = boot_info.memory_map_tag().expect(
- "Memory map tag required");
- let elf_sections_tag = boot_info.elf_sections_tag().expect(
- "Elf sections tag required");
-
- let kernel_start = elf_sections_tag.sections()
- .filter(|s| s.is_allocated()).map(|s| s.addr).min().unwrap();
- let kernel_end = elf_sections_tag.sections()
- .filter(|s| s.is_allocated()).map(|s| s.addr + s.size).max()
- .unwrap();
-
- println!("kernel start: {:#x}, kernel end: {:#x}",
- kernel_start,
- kernel_end);
- println!("multiboot start: {:#x}, multiboot end: {:#x}",
- boot_info.start_address(),
- boot_info.end_address());
-
- let mut frame_allocator = AreaFrameAllocator::new(
- kernel_start as usize, kernel_end as usize,
- boot_info.start_address(), boot_info.end_address(),
- memory_map_tag.memory_areas());
-
- paging::remap_the_kernel(&mut frame_allocator, boot_info);
-}
-```
-
-We've just moved the code to a new function. However, we've sneaked some improvements in:
-
-- An additional `.filter(|s| s.is_allocated())` in the calculation of `kernel_start` and `kernel_end`. This ignores all sections that aren't loaded to memory (such as debug sections). Thus, the kernel end address is no longer artificially increased by such sections.
-- We use the `start_address()` and `end_address()` methods of `boot_info` instead of calculating the adresses manually.
-- We use the alternate `{:#x}` form when printing kernel/multiboot addresses. Before, we used `0x{:x}`, which leads to the same result. For a complete list of these “alternate” formatting forms, check out the [std::fmt documentation].
-
-[std::fmt documentation]: https://doc.rust-lang.org/nightly/std/fmt/index.html#sign0
-
-### Safety
-It is important that the `memory::init` function is called only once, because it creates a new frame allocator based on kernel and multiboot start/end. When we call it a second time, a new frame allocator is created that reassigns the same frames, even if they are already in use.
-
-In the second call it would use an identical frame allocator to remap the kernel. The `remap_the_kernel` function would request a frame from the frame allocator to create a new page table. But the returned frame is already in use, since we used it to create our current page table in the first call. In order to initialize the new table, the function zeroes it. This is the point where everything breaks, since we zero our current page table. The CPU is unable to read the next instruction and throws a page fault.
-
-So we need to ensure that `memory::init` can be only called once. We could mark it as `unsafe`, which would bring it in line with Rust's memory safety rules. However, that would just push the unsafety to the caller. The caller can still accidentally call the function twice, the only difference is that the mistake needs to happen inside `unsafe` blocks.
-
-A better solution is to insert a check at the function's beginning, that panics if the function is called a second time. This approach has a small runtime cost, but we only call it once, so it's negligible. And we avoid two `unsafe` blocks (one at the calling site and one at the function itself), which is always good.
-
-In order to make such checks easy, I created a small crate named [once]. To add it, we run `cargo add once` and add the following to our `src/lib.rs`:
-
-[once]: https://crates.io/crates/once
-
-```rust
-// in src/lib.rs
-
-#[macro_use]
-extern crate once;
-```
-
-The crate provides an [assert_has_not_been_called!] macro (sorry for the long name :D). We can use it to fix the safety problem easily:
-
-[assert_has_not_been_called!]: https://docs.rs/once/0.3.2/once/macro.assert_has_not_been_called!.html
-
-``` rust
-// in src/memory/mod.rs
-
-pub fn init(boot_info: &BootInformation) {
- assert_has_not_been_called!("memory::init must be called only once");
-
- let memory_map_tag = ...
- ...
-}
-```
-That's it. Now our `memory::init` function can only be called once. The macro works by creating a static [AtomicBool] named `CALLED`, which is initialized to `false`. When the macro is invoked, it checks the value of `CALLED` and sets it to `true`. If the value was already `true` before, the macro panics.
-
-[AtomicBool]: https://doc.rust-lang.org/nightly/core/sync/atomic/struct.AtomicBool.html
-
-### Mapping the Heap
-Now we're ready to map the heap pages. In order to do it, we need access to the `ActivePageTable` or `Mapper` instance (see the [page table] and [kernel remapping] posts). For that we return it from the `paging::remap_the_kernel` function:
-
-[page table]: ./first-edition/posts/06-page-tables/index.md
-[kernel remapping]: ./first-edition/posts/07-remap-the-kernel/index.md
-
-```rust
-// in src/memory/paging/mod.rs
-
-pub fn remap_the_kernel(allocator: &mut A, boot_info: &BootInformation)
- -> ActivePageTable // new
- where A: FrameAllocator
-{
- ...
- println!("guard page at {:#x}", old_p4_page.start_address());
-
- active_table // new
-}
-```
-
-Now we have full page table access in the `memory::init` function. This allows us to map the heap pages to physical frames:
-
-```rust
-// in src/memory/mod.rs
-
-pub fn init(boot_info: &BootInformation) {
- ...
-
- let mut frame_allocator = ...;
-
- // below is the new part
-
- let mut active_table = paging::remap_the_kernel(&mut frame_allocator,
- boot_info);
-
- use self::paging::Page;
- use {HEAP_START, HEAP_SIZE};
-
- let heap_start_page = Page::containing_address(HEAP_START);
- let heap_end_page = Page::containing_address(HEAP_START + HEAP_SIZE-1);
-
- for page in Page::range_inclusive(heap_start_page, heap_end_page) {
- active_table.map(page, paging::WRITABLE, &mut frame_allocator);
- }
-}
-```
-
-The `Page::range_inclusive` function is just a copy of the `Frame::range_inclusive` function:
-
-```rust
-// in src/memory/paging/mod.rs
-
-#[derive(…, PartialEq, Eq, PartialOrd, Ord)]
-pub struct Page {...}
-
-impl Page {
- ...
- pub fn range_inclusive(start: Page, end: Page) -> PageIter {
- PageIter {
- start: start,
- end: end,
- }
- }
-}
-
pub struct PageIter {
start: Page,
end: Page,