Compare commits

...

9 Commits

Author SHA1 Message Date
seewishnew
23daa3dabc Merge 128d456923 into 3527693aeb 2024-02-16 11:09:36 -08:00
Philipp Oppermann
3527693aeb Merge pull request #1296 from phil-opp/update-bootloader-version
Specify bootloader version without a patch version
2024-02-16 13:29:45 +01:00
Philipp Oppermann
4376233ec3 Update bootloader docs.rs links to always point to latest v0.9 version 2024-02-16 13:26:05 +01:00
Philipp Oppermann
1f6402f746 Specify bootloader version as v0.9 (without patch version) in all posts
Cargo automatically chooses the latest patch version, but beginners might not know that. So this hopefully avoids some confusion.
2024-02-16 13:25:04 +01:00
Philipp Oppermann
3556211904 Merge pull request #1295 from phil-opp/update-data-layout
Update data layouts of custom targets to LLVM 18
2024-02-16 13:15:03 +01:00
Philipp Oppermann
c31dcb48e5 Update data layouts of custom targets to LLVM 18 2024-02-16 13:11:03 +01:00
Vishnu C
128d456923 Minor corrections 2022-12-28 01:48:13 -08:00
Vishnu C
0652ed79c3 Minor edits and formatting corrections 2022-12-28 01:40:54 -08:00
Vishnu C
7500cac640 Adds code and documentation to rectify potential leaky headers in linked list allocator 2022-12-28 01:23:09 -08:00
13 changed files with 145 additions and 50 deletions

View File

@@ -505,7 +505,7 @@ A minimal target specification that describes the `x86_64-unknown-linux-gnu` tar
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "target-endian": "little",
     "target-pointer-width": "64",
     "target-c-int-width": "32",
@@ -527,7 +527,7 @@ In order to disable the multimedia extensions, we create a new target named `x86
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "target-endian": "little",
     "target-pointer-width": "64",
     "target-c-int-width": "32",

View File

@@ -98,7 +98,7 @@ Rust allows us to define [custom targets] through a JSON configuration file. A m
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "linker-flavor": "gcc",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -133,7 +133,7 @@ For our target system, we define the following JSON configuration in a file name
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "linker-flavor": "gcc",
     "target-endian": "little",
     "target-pointer-width": "64",

View File

@@ -122,7 +122,7 @@ rtl = true
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -145,7 +145,7 @@ rtl = true
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -204,7 +204,7 @@ For more information, see our post on [disabling SIMD](@/edition-2/posts/02-mini
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -414,7 +414,7 @@ pub extern "C" fn _start() -> ! {
 # in Cargo.toml
 [dependencies]
-bootloader = "0.9.23"
+bootloader = "0.9"
 ```
 افزودن بوت‌لودر به عنوان وابستگی برای ایجاد یک دیسک ایمیج قابل بوت کافی نیست. مشکل این است که ما باید هسته خود را با بوت لودر پیوند دهیم، اما کارگو از [اسکریپت های بعد از بیلد] پشتیبانی نمی‌کند.

View File

@@ -118,7 +118,7 @@ Pour notre système cible toutefois, nous avons besoin de paramètres de configu
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -141,7 +141,7 @@ Nous pouvons aussi cibler les systèmes `x86_64` avec notre noyau, donc notre sp
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -201,7 +201,7 @@ Notre fichier de spécification de cible ressemble maintenant à ceci :
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",

View File

@@ -116,7 +116,7 @@ Cargoは`--target`パラメータを使ってさまざまなターゲットを
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -139,7 +139,7 @@ Cargoは`--target`パラメータを使ってさまざまなターゲットを
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -198,7 +198,7 @@ SIMDを無効化することによる問題に、`x86_64`における浮動小
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -411,7 +411,7 @@ pub extern "C" fn _start() -> ! {
 # in Cargo.toml
 [dependencies]
-bootloader = "0.9.23"
+bootloader = "0.9"
 ```
 bootloaderを依存として加えることだけでブータブルディスクイメージが実際に作れるわけではなく、私達のカーネルをコンパイル後にブートローダーにリンクする必要があります。問題は、cargoが[<ruby>ビルド後<rp> (</rp><rt>post-build</rt><rp>) </rp></ruby>にスクリプトを走らせる機能][post-build scripts]を持っていないことです。

View File

@@ -124,7 +124,7 @@ Cargo는 `--target` 인자를 통해 여러 컴파일 대상 시스템들을 지
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -148,7 +148,7 @@ Cargo는 `--target` 인자를 통해 여러 컴파일 대상 시스템들을 지
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -209,7 +209,7 @@ SIMD 레지스터 값들을 메모리에 백업하고 또 다시 복구하는
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -418,7 +418,7 @@ pub extern "C" fn _start() -> ! {
 # Cargo.toml 에 들어갈 내용
 [dependencies]
-bootloader = "0.9.23"
+bootloader = "0.9"
 ```
 부트로더를 의존 크레이트로 추가하는 것만으로는 부팅 가능한 디스크 이미지를 만들 수 없습니다. 커널 컴파일이 끝난 후 커널을 부트로더와 함께 링크할 수 있어야 하는데, cargo는 현재 [빌드 직후 스크립트 실행][post-build scripts] 기능을 지원하지 않습니다.

View File

@@ -112,7 +112,7 @@ For our target system, however, we require some special configuration parameters
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -135,7 +135,7 @@ We also target `x86_64` systems with our kernel, so our target specification wil
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -195,7 +195,7 @@ Our target specification file now looks like this:
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -403,7 +403,7 @@ Instead of writing our own bootloader, which is a project on its own, we use the
 # in Cargo.toml
 [dependencies]
-bootloader = "0.9.23"
+bootloader = "0.9"
 ```
 Adding the bootloader as a dependency is not enough to actually create a bootable disk image. The problem is that we need to link our kernel with the bootloader after compilation, but cargo has no support for [post-build scripts].

View File

@@ -119,7 +119,7 @@ Cargo поддерживает различные целевые системы
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -142,7 +142,7 @@ Cargo поддерживает различные целевые системы
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -202,7 +202,7 @@ Cargo поддерживает различные целевые системы
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -411,7 +411,7 @@ pub extern "C" fn _start() -> ! {
 # in Cargo.toml
 [dependencies]
-bootloader = "0.9.23"
+bootloader = "0.9"
 ```
 Добавление загрузчика в качестве зависимости недостаточно для создания загрузочного образа диска. Проблема в том, что нам нужно связать наше ядро с загрузчиком после компиляции, но в cargo нет поддержки [скриптов после сборки][post-build scripts].

View File

@@ -92,7 +92,7 @@ Nightly 版本的编译器允许我们在源码的开头插入**特性标签**
 ```json
 {
     "llvm-target": "x86_64-unknown-linux-gnu",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -112,7 +112,7 @@ Nightly 版本的编译器允许我们在源码的开头插入**特性标签**
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -166,7 +166,7 @@ Nightly 版本的编译器允许我们在源码的开头插入**特性标签**
 ```json
 {
     "llvm-target": "x86_64-unknown-none",
-    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
+    "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
     "arch": "x86_64",
     "target-endian": "little",
     "target-pointer-width": "64",
@@ -365,7 +365,7 @@ pub extern "C" fn _start() -> ! {
 # in Cargo.toml
 [dependencies]
-bootloader = "0.9.23"
+bootloader = "0.9"
 ```
 只添加引导程序为依赖项,并不足以创建一个可引导的磁盘映像;我们还需要内核编译完成之后,将内核和引导程序组合在一起。然而,截至目前,原生的 cargo 并不支持在编译完成后添加其它步骤(详见[这个 issue](https://github.com/rust-lang/cargo/issues/545))。

View File

@@ -281,7 +281,7 @@ frame.map(|frame| frame.start_address() + u64::from(addr.page_offset()))
 ```toml
 [dependencies]
-bootloader = { version = "0.9.23", features = ["map_physical_memory"]}
+bootloader = { version = "0.9", features = ["map_physical_memory"]}
 ```
 この機能を有効化すると、ブートローダは物理メモリの全体を、ある未使用の仮想アドレス空間にマッピングします。この仮想アドレスの範囲をカーネルに伝えるために、ブートローダは**boot information**構造体を渡します。
@@ -291,7 +291,7 @@ bootloader = { version = "0.9.23", features = ["map_physical_memory"]}
 `bootloader`クレートは、カーネルに渡されるすべての情報を格納する[`BootInfo`]構造体を定義しています。この構造体はまだ開発の初期段階にあり、将来の[対応していないsemverの][semver-incompatible]ブートローダのバージョンに更新した際には、うまく動かなくなることが予想されます。`map_physical_memory` featureが有効化されているので、いまこれは`memory_map``physical_memory_offset`という2つのフィールドを持っています
-[`BootInfo`]: https://docs.rs/bootloader/0.9.3/bootloader/bootinfo/struct.BootInfo.html
+[`BootInfo`]: https://docs.rs/bootloader/0.9/bootloader/bootinfo/struct.BootInfo.html
 [semver-incompatible]: https://doc.rust-lang.org/stable/cargo/reference/specifying-dependencies.html#caret-requirements
 - `memory_map`フィールドは、利用可能な物理メモリの情報の概要を保持しています。システムの利用可能な物理メモリがどのくらいかや、どのメモリ領域がVGAハードウェアのようなデバイスのために予約されているかをカーネルに伝えます。これらのメモリマッピングはBIOSやUEFIファームウェアから取得できますが、それが可能なのはブートのごく初期に限られます。そのため、これらをカーネルが後で取得することはできないので、ブートローダによって提供する必要があるわけです。このメモリマッピングは後で必要となります。

View File

@@ -278,7 +278,7 @@ We choose the first approach for our kernel since it is simple, platform-indepen
 ```toml
 [dependencies]
-bootloader = { version = "0.9.23", features = ["map_physical_memory"]}
+bootloader = { version = "0.9", features = ["map_physical_memory"]}
 ```
 With this feature enabled, the bootloader maps the complete physical memory to some unused virtual address range. To communicate the virtual address range to our kernel, the bootloader passes a _boot information_ structure.
@@ -287,7 +287,7 @@ With this feature enabled, the bootloader maps the complete physical memory to s
 The `bootloader` crate defines a [`BootInfo`] struct that contains all the information it passes to our kernel. The struct is still in an early stage, so expect some breakage when updating to future [semver-incompatible] bootloader versions. With the `map_physical_memory` feature enabled, it currently has the two fields `memory_map` and `physical_memory_offset`:
-[`BootInfo`]: https://docs.rs/bootloader/0.9.3/bootloader/bootinfo/struct.BootInfo.html
+[`BootInfo`]: https://docs.rs/bootloader/0.9/bootloader/bootinfo/struct.BootInfo.html
 [semver-incompatible]: https://doc.rust-lang.org/stable/cargo/reference/specifying-dependencies.html#caret-requirements
 - The `memory_map` field contains an overview of the available physical memory. This tells our kernel how much physical memory is available in the system and which memory regions are reserved for devices such as the VGA hardware. The memory map can be queried from the BIOS or UEFI firmware, but only very early in the boot process. For this reason, it must be provided by the bootloader because there is no way for the kernel to retrieve it later. We will need the memory map later in this post.
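
To make the two `BootInfo` fields easier to picture, here is a rough sketch of how a kernel entry point consumes them with the bootloader v0.9 API (the `kernel_main` name and the `x86_64::VirtAddr` usage are illustrative assumptions, not part of this diff):

```rust
// in src/main.rs (assumes the #![no_std] kernel setup from the post)
use bootloader::{entry_point, BootInfo};
use x86_64::VirtAddr;

entry_point!(kernel_main);

fn kernel_main(boot_info: &'static BootInfo) -> ! {
    // All physical memory is mapped starting at this virtual offset, so
    // physical address `p` can be accessed at `p + physical_memory_offset`.
    let _phys_mem_offset = VirtAddr::new(boot_info.physical_memory_offset);

    // The memory map describes which physical regions are usable and which
    // are reserved for devices such as the VGA hardware.
    for region in boot_info.memory_map.iter() {
        let _ = (region.range.start_addr(), region.region_type);
    }

    loop {}
}
```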

View File

@@ -288,7 +288,7 @@ frame.map(|frame| frame.start_address() + u64::from(addr.page_offset()))
 ```toml
 [dependencies]
-bootloader = { version = "0.9.23", features = ["map_physical_memory"]}
+bootloader = { version = "0.9", features = ["map_physical_memory"]}
 ```
 启用这个功能后bootloader 将整个物理内存映射到一些未使用的虚拟地址范围。为了将虚拟地址范围传达给我们的内核bootloader 传递了一个 _启动信息_ 结构。
@@ -298,7 +298,7 @@ bootloader = { version = "0.9.23", features = ["map_physical_memory"]}
 `Bootloader` 板块定义了一个[`BootInfo`]结构,包含了它传递给我们内核的所有信息。这个结构还处于早期阶段,所以在更新到未来的 [semver-incompatible] bootloader 版本时,可能会出现一些故障。在启用 "map_physical_memory" 功能后,它目前有两个字段 "memory_map" 和 "physical_memory_offset"。
-[`BootInfo`]: https://docs.rs/bootloader/0.9.3/bootloader/bootinfo/struct.BootInfo.html
+[`BootInfo`]: https://docs.rs/bootloader/0.9/bootloader/bootinfo/struct.BootInfo.html
 [semver-incompatible]: https://doc.rust-lang.org/stable/cargo/reference/specifying-dependencies.html#caret-requirements
 - `memory_map`字段包含了可用物理内存的概览。它告诉我们的内核系统中有多少物理内存可用哪些内存区域被保留给设备如VGA硬件。内存图可以从BIOS或UEFI固件中查询但只能在启动过程的早期查询。由于这个原因它必须由引导程序提供因为内核没有办法在以后检索到它。在这篇文章的后面我们将需要内存图。

View File

@@ -570,11 +570,26 @@ use super::align_up;
 use core::mem;
 
 impl LinkedListAllocator {
+    /// Aligns the given address up to a multiple of
+    /// `mem::align_of::<ListNode>()`, which is 8 bytes
+    /// on x86_64.
+    fn align_to_list_node(addr: usize) -> usize {
+        align_up(addr, mem::align_of::<ListNode>())
+    }
+
+    /// Checks that the alignment and size conditions for storing
+    /// a `ListNode` are satisfied for the given region
+    /// [addr, addr + size).
+    fn is_valid_region(addr: usize, size: usize) -> bool {
+        addr == Self::align_to_list_node(addr) &&
+            size >= mem::size_of::<ListNode>()
+    }
+
     /// Adds the given memory region to the front of the list.
     unsafe fn add_free_region(&mut self, addr: usize, size: usize) {
-        // ensure that the freed region is capable of holding ListNode
-        assert_eq!(align_up(addr, mem::align_of::<ListNode>()), addr);
-        assert!(size >= mem::size_of::<ListNode>());
+        // ensure that the region is capable of holding ListNode
+        assert!(Self::is_valid_region(addr, size));
 
         // create a new list node and append it at the start of the list
         let mut node = ListNode::new(size);
@@ -664,18 +679,34 @@ impl LinkedListAllocator {
     fn alloc_from_region(region: &ListNode, size: usize, align: usize)
         -> Result<usize, ()>
     {
-        let alloc_start = align_up(region.start_addr(), align);
-        let alloc_end = alloc_start.checked_add(size).ok_or(())?;
+        let mut alloc_start = align_up(region.start_addr(), align);
+
+        if alloc_start != region.start_addr() {
+            // There is some potentially wasted space at the beginning of the
+            // region that cannot be used due to alignment constraints. We want
+            // to be able to recycle this space in our linked list as well;
+            // otherwise we may never be able to reclaim it.
+            // We need to ensure that there is enough space up front for a
+            // `ListNode`, so we realign alloc_start after
+            // `size_of::<ListNode>` bytes from `region.start_addr()`.
+            // In practice, this can occur on x86_64 only when align is set
+            // to 16 bytes.
+            let pushed_start_addr = region
+                .start_addr()
+                .checked_add(mem::size_of::<ListNode>())
+                .ok_or(())?;
+            alloc_start = align_up(pushed_start_addr, align);
+        }
+
+        let alloc_end = alloc_start.checked_add(size).ok_or(())?;
 
         if alloc_end > region.end_addr() {
             // region too small
             return Err(());
         }
 
         let excess_size = region.end_addr() - alloc_end;
-        if excess_size > 0 && excess_size < mem::size_of::<ListNode>() {
-            // rest of region too small to hold a ListNode (required because the
-            // allocation splits the region in a used and a free part)
+        if excess_size > 0 && !Self::is_valid_region(alloc_end, excess_size) {
+            // improper alignment, or the rest of the region is too small to
+            // hold a ListNode (required because the allocation splits the
+            // region into a used and up to two free parts)
             return Err(());
         }
@@ -687,7 +718,16 @@ impl LinkedListAllocator {
 First, the function calculates the start and end address of a potential allocation, using the `align_up` function we defined earlier and the [`checked_add`] method. If an overflow occurs or if the end address is behind the end address of the region, the allocation doesn't fit in the region and we return an error.
 
-The function performs a less obvious check after that. This check is necessary because most of the time an allocation does not fit a suitable region perfectly, so that a part of the region remains usable after the allocation. This part of the region must store its own `ListNode` after the allocation, so it must be large enough to do so. The check verifies exactly that: either the allocation fits perfectly (`excess_size == 0`) or the excess size is large enough to store a `ListNode`.
+The function performs a couple of less obvious checks on top of that. When we first call `align_up`, we may get an `alloc_start` that differs from `region.start_addr()`. In that case, there is still some free memory between `region.start_addr()` (inclusive) and the initially aligned `alloc_start` (exclusive) that we need to keep track of. We must ensure that this leading fragment is suitable for storing a `ListNode`, which is exactly what the alignment and size checks in `is_valid_region` verify.
+
+Since `region.start_addr()` is guaranteed to satisfy the alignment condition of `ListNode`, we technically only need to guarantee that the size is not too small. So we realign after reserving space for one `ListNode` instance after `region.start_addr()`. This may push the end address past the end of the region, in which case the region is simply not big enough.
+
+Interestingly, on the 64-bit architecture we are targeting, this situation can occur in exactly one edge case: `align` is set to 16 bytes and `region.start_addr()` happens to be a number of the form `16*n + 8`. `alloc_start` would then be set to `16*(n+1)`, leaving a `head_excess_size` of just 8 bytes, which is not enough to store the 16 bytes required for a `ListNode`.
+
+There can also be some free memory between `alloc_end` (inclusive) and `region.end_addr()` (exclusive). In general, `alloc_end` is neither guaranteed to satisfy the alignment condition of `ListNode`, nor is the remaining space guaranteed to be large enough to store one. This check is necessary because most of the time an allocation does not fit a suitable region perfectly, so a part of the region remains usable after the allocation. That part must store its own `ListNode`, so it must be large enough and properly aligned, which is exactly what our `is_valid_region` method checks.
+
+We will soon see how we modify the requested layout size and alignment in our `GlobalAlloc::alloc()` implementation for the `LinkedListAllocator`, so that every allocation additionally conforms to the alignment requirements for storing a `ListNode`. This is essential to ensure that `GlobalAlloc::dealloc()` can successfully add the region back into our linked list.
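
To make the `16*n + 8` edge case concrete, here is a small standalone sketch (not part of the diff; it assumes the post's bitmask-based `align_up` implementation and a 16-byte `ListNode`):

```rust
/// Align `addr` upwards to `align` (a power of two), as in the post.
fn align_up(addr: usize, align: usize) -> usize {
    (addr + align - 1) & !(align - 1)
}

fn main() {
    let list_node_size = 16; // size_of::<ListNode>() on x86_64
    let region_start = 0x_4444_4444_0018_usize; // an address of the form 16*n + 8
    let align = 16;

    // Naive alignment leaves only 8 bytes of head excess, which is too
    // small to store a 16-byte ListNode.
    let naive_start = align_up(region_start, align);
    assert_eq!(naive_start - region_start, 8);

    // Realigning after reserving room for one ListNode, as the updated
    // alloc_from_region does, leaves a reusable 24-byte head fragment.
    let pushed_start = align_up(region_start + list_node_size, align);
    assert_eq!(pushed_start - region_start, 24);
    assert!(pushed_start - region_start >= list_node_size);
}
```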
 #### Implementing `GlobalAlloc`
@@ -712,10 +752,20 @@ unsafe impl GlobalAlloc for Locked<LinkedListAllocator> {
         if let Some((region, alloc_start)) = allocator.find_region(size, align) {
             let alloc_end = alloc_start.checked_add(size).expect("overflow");
-            let excess_size = region.end_addr() - alloc_end;
-            if excess_size > 0 {
-                allocator.add_free_region(alloc_end, excess_size);
+
+            let start_addr = region.start_addr();
+            let end_addr = region.end_addr();
+
+            let tail_excess_size = end_addr - alloc_end;
+            if tail_excess_size > 0 {
+                allocator.add_free_region(alloc_end, tail_excess_size);
             }
+
+            let head_excess_size = alloc_start - start_addr;
+            if head_excess_size > 0 {
+                allocator.add_free_region(start_addr, head_excess_size);
+            }
+
             alloc_start as *mut u8
         } else {
             ptr::null_mut()
@@ -735,7 +785,7 @@ Let's start with the `dealloc` method because it is simpler: First, it performs
 The `alloc` method is a bit more complex. It starts with the same layout adjustments and also calls the [`Mutex::lock`] function to receive a mutable allocator reference. Then it uses the `find_region` method to find a suitable memory region for the allocation and remove it from the list. If this doesn't succeed and `None` is returned, it returns `null_mut` to signal an error as there is no suitable memory region.
 
-In the success case, the `find_region` method returns a tuple of the suitable region (no longer in the list) and the start address of the allocation. Using `alloc_start`, the allocation size, and the end address of the region, it calculates the end address of the allocation and the excess size again. If the excess size is not null, it calls `add_free_region` to add the excess size of the memory region back to the free list. Finally, it returns the `alloc_start` address casted as a `*mut u8` pointer.
+In the success case, the `find_region` method returns a tuple of the suitable region (no longer in the list) and the start address of the allocation. Using `alloc_start`, the allocation size, and the start and end addresses of the region, it calculates the end address of the allocation and the sizes of the excess fragments at the head and tail of the region. For each excess size that is not zero, it calls `add_free_region` to add that fragment back to the free list. Finally, it returns the `alloc_start` address cast as a `*mut u8` pointer.
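
The layout adjustments mentioned here are performed by the post's existing `size_align` helper, which this diff leaves unchanged. For reference, it looks roughly like this (a reconstructed sketch, not part of the diff):

```rust
// in src/allocator/linked_list.rs (existing code, shown for context only)
use core::alloc::Layout;
use core::mem;

impl LinkedListAllocator {
    /// Adjust the given layout so that the resulting allocated memory
    /// region is also capable of storing a `ListNode`.
    ///
    /// Returns the adjusted size and alignment as a (size, align) tuple.
    fn size_align(layout: Layout) -> (usize, usize) {
        let layout = layout
            .align_to(mem::align_of::<ListNode>())
            .expect("adjusting alignment failed")
            .pad_to_align();
        let size = layout.size().max(mem::size_of::<ListNode>());
        (size, layout.align())
    }
}
```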
 #### Layout Adjustments
@@ -797,6 +847,51 @@ many_boxes_long_lived... [ok]
 This shows that our linked list allocator is able to reuse freed memory for subsequent allocations.
 
+Additionally, to test that we are not leaking any excess fragments due to `alloc_start` realignment, we can add a simple test case:
+
+```rust
+// in tests/heap_allocation.rs
+
+#[test_case]
+fn head_excess_reuse() {
+    use core::mem::{align_of, size_of};
+    use blog_os::allocator::HEAP_START;
+
+    #[derive(Debug, Clone, PartialEq, Eq)]
+    #[repr(C, align(8))]
+    struct A(u128, u64);
+    assert_eq!(8, align_of::<A>());
+    assert_eq!(24, size_of::<A>()); // 24 % 16 = 8
+
+    #[derive(Debug, Clone, PartialEq, Eq)]
+    #[repr(C, align(16))]
+    struct B(u128, u64);
+    assert_eq!(16, align_of::<B>());
+    assert_eq!(32, size_of::<B>());
+
+    let a1 = Box::new(A(1, 1));
+    let b1 = Box::new(B(1, 1));
+    let a2 = Box::new(A(2, 2));
+    assert_eq!(*a1, A(1, 1));
+    assert_eq!(*b1, B(1, 1));
+    assert_eq!(*a2, A(2, 2));
+
+    let a1_raw = Box::into_raw(a1) as usize;
+    let b1_raw = Box::into_raw(b1) as usize;
+    let a2_raw = Box::into_raw(a2) as usize;
+    assert_eq!(HEAP_START, a1_raw);
+    assert_eq!(HEAP_START + 48, b1_raw);
+    assert_eq!(HEAP_START + 24, a2_raw);
+}
+```
+
+In this test case, we define two structurally identical structs `A` and `B` with different alignment requirements, specified via their `#[repr]` attributes. Instances of `A` have addresses that are a multiple of 8, and instances of `B` have addresses that are a multiple of 16.
+
+`a1`, an instance of struct `A` on the heap, occupies the range from `HEAP_START` to `HEAP_START + 24`, since `HEAP_START` is already a multiple of 8. `b1` is an instance of struct `B` on the heap and needs an address that is a multiple of 16. Therefore, although `HEAP_START + 24` is free, `alloc_from_region` will first attempt to set `alloc_start = HEAP_START + 32`, which would leave only 8 bytes between `HEAP_START + 24` and `HEAP_START + 32`, not enough room to store a `ListNode`. It then attempts `alloc_start = HEAP_START + 48`, which satisfies the alignment constraint and leaves enough head excess for a `ListNode`.
+
+Because our `alloc` implementation adds the `head_excess_size` fragment after the `tail_excess_size` fragment, and because our linked list follows LIFO (last in, first out) ordering, the next heap allocation searches the `head_excess_size` region first. We exploit this fact by allocating `a2`, another instance of struct `A`, which fits neatly into the 24 bytes from `HEAP_START + 24` to `HEAP_START + 48` that were recycled as the `head_excess_size` fragment of the previous `b1` allocation. In the final lines of the test, we leak the boxes via `Box::into_raw` and cast the resulting pointers to `usize` so we can assert that our linked list allocator accounted for all the excess fragments.
 ### Discussion
 
 In contrast to the bump allocator, the linked list allocator is much more suitable as a general-purpose allocator, mainly because it is able to directly reuse freed memory. However, it also has some drawbacks. Some of them are only caused by our basic implementation, but there are also fundamental drawbacks of the allocator design itself.