
Tessy: a middle-aged programmer who has worked in telecom, smartphone, security, and chip companies, making a living on Linux.

Overview

Kdump provides a mechanism that, when the kernel crashes, dumps the system's entire memory image and register state into a file, which can later be analyzed and debugged with tools such as gdb and crash. It is similar to the coredump mechanism for user-space programs. Its main flow is shown in the figure below:

As you can see, the core idea is to reserve a region of memory and preload a standby (capture) kernel into it. When the primary kernel crashes, control jumps to the capture kernel, which dumps the memory used by the primary kernel, together with the register state at the time of the crash, into a disk file for later analysis. That file is in ELF core format.

kdump is mainly used to capture pure software failures. In domains that also need to capture hardware failures, you can imitate its principle, optimize and adapt it, and build your own coredump mechanism.

Next, let's analyze the whole kdump mechanism in detail.

Installation

In the past, installing kdump meant manually installing kexec-tools, kdump-tools, and crash one by one and hand-editing the grub cmdline parameters. On current Ubuntu, installing the single linux-crashdump package does all of this for you automatically:

sudo apt-get install linux-crashdump

After installation, you can check whether the system is configured correctly with the kdump-config command:

$ kdump-config show
DUMP_MODE:        kdump
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash        // directory where kdump files are stored
crashkernel addr: 0x
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-5.8.18+
kdump initrd:
   /var/lib/kdump/initrd.img: symbolic link to /var/lib/kdump/initrd.img-5.8.18+
current state:    ready to kdump    // "ready" means the kdump mechanism is set up
kexec command:
  /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-5.8.18+ root=UUID=9ee42fe2-4e73-4703-8b6d-bb238ffdb003 ro find_preseed=/preseed.cfg auto noprompt priority=critical locale=en_US quiet reset_devices systemd.unit=kdump-tools-dump.service nr_cpus=1 irqpoll nousb ata_piix.prefer_ms_hyperv=0" --initrd=/var/lib/kdump/initrd.img /var/lib/kdump/vmlinuz

linux-crashdump is essentially a meta-package made up of several separate packages:

$ sudo apt-get install linux-crashdump -d
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  crash efibootmgr grub-common grub-efi-arm64 grub-efi-arm64-bin
  grub-efi-arm64-signed grub2-common kdump-tools kexec-tools libfreetype6
  libsnappy1v5 makedumpfile os-prober
Suggested packages:
  multiboot-doc xorriso desktop-base
Recommended packages:
  secureboot-db
The following NEW packages will be installed:
  crash efibootmgr grub-common grub-efi-arm64 grub-efi-arm64-bin
  grub-efi-arm64-signed grub2-common kdump-tools kexec-tools libfreetype6
  libsnappy1v5 linux-crashdump makedumpfile os-prober
0 upgraded, 14 newly installed, 0 to remove and 67 not upgraded.
Need to get 6611 kB of archives.

Triggering kdump

With kdump ready, we manually trigger a panic:

$ sudo bash
# echo c > /proc/sysrq-trigger

After kdump completes and the system reboots, we can find the memory dump files generated by kdump in the /var/crash directory:

$ ls -l /var/crash/202107011353/
total 65324
-rw------- 1 root whoopsie   119480 Jul 1 13:53 dmesg.202107011353   // kernel log
-rw------- 1 root whoopsie 66766582 Jul 1 13:53 dump.202107011353    // memory dump, compressed format
$ sudo file /var/crash/202107011353/dump.202107011353
/var/crash/202107011353/dump.202107011353: Kdump compressed dump v6, system Linux, node ubuntu, release 5.8.18+, version 18 SMP Thu Jul 1 11:24:39 CST 2021, machine x86_64, domain (none)

The dump file generated by default is compressed by makedumpfile; alternatively, we can change a bit of configuration to generate a raw ELF core file:

$ ls -l /var/crash/202107011132/
total 1785584
-rw------- 1 root whoopsie     117052 Jul 1 11:32 dmesg.202107011132    // kernel log
-r-----r-- 1 root whoopsie 1979371520 Jul 1 11:32 vmcore.202107011132   // memory dump, raw ELF format
$ file /var/crash/202107011132/vmcore.202107011132
/var/crash/202107011132/vmcore.202107011132: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style

Debugging the kdump file

The crash utility makes analyzing a kdump file very convenient. crash wraps gdb and adds many shortcut commands for kernel debugging. You can also analyze the file with gdb or trace32:

$ sudo crash /usr/lib/debug/boot/vmlinux-5.8.0-43-generic /var/crash/202106170338/dump.202106170338

Flow analysis of kdump-tools.service

Earlier we said that kdump's default compressed format can be changed to the raw ELF core file format; in this section we implement exactly that.

Copying the /proc/vmcore file from memory to disk is done by the kdump-tools.service running in the crash kernel. Let's walk through its flow in detail:

  1. First, from the kdump-config configuration we can see that after the second (crash) kernel boots, systemd only needs to start one service, kdump-tools-dump.service:

$ kdump-config show
DUMP_MODE:        kdump
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash
crashkernel addr: 0x73000000
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-5.8.0-43-generic
kdump initrd:
   /var/lib/kdump/initrd.img: symbolic link to /var/lib/kdump/initrd.img-5.8.0-43-generic
current state:    ready to kdump
kexec command:
  /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-5.8.0-43-generic root=UUID=9ee42fe2-4e73-4703-8b6d-bb238ffdb003 ro find_preseed=/preseed.cfg auto noprompt priority=critical locale=en_US quiet reset_devices systemd.unit=kdump-tools-dump.service nr_cpus=1 irqpoll nousb ata_piix.prefer_ms_hyperv=0" --initrd=/var/lib/kdump/initrd.img /var/lib/kdump/vmlinuz
  2. kdump-tools-dump.service essentially invokes the kdump-tools start script:

$ systemctl cat kdump-tools-dump.service
# /lib/systemd/system/kdump-tools-dump.service
[Unit]
Description=Kernel crash dump capture service
Wants=network-online.target dbus.socket systemd-resolved.service
After=network-online.target dbus.socket systemd-resolved.service

[Service]
Type=oneshot
StandardOutput=syslog+console
EnvironmentFile=/etc/default/kdump-tools
ExecStart=/etc/init.d/kdump-tools start
ExecStop=/etc/init.d/kdump-tools stop
RemainAfterExit=yes
  3. kdump-tools in turn calls kdump-config savecore:

$ vim /etc/init.d/kdump-tools
KDUMP_SCRIPT=/usr/sbin/kdump-config
echo -n "Starting $DESC: "
$KDUMP_SCRIPT savecore
  4. kdump-config calls makedumpfile -c -d 31 /proc/vmcore dump.xxxxxx:

MAKEDUMP_ARGS=${MAKEDUMP_ARGS:="-c -d 31"}
vmcore_file=/proc/vmcore
makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP

By default, kdump-tools-dump.service calls makedumpfile to generate a compressed dump file. But what if we want to analyze a raw ELF-format vmcore file?

4.1) First, we modify the MAKEDUMP_ARGS parameter in the /usr/sbin/kdump-config file so that makedumpfile fails:

MAKEDUMP_ARGS=${MAKEDUMP_ARGS:="-xxxxx -c -d 31"}   # -xxxxx is an arbitrary bogus option

4.2) kdump-config then falls back to cp /proc/vmcore vmcore.xxxxxx, producing a raw ELF-format vmcore file:

log_action_msg "running makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP"
makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP   # first try makedumpfile to generate a compressed dump
ERROR=$?
if [ $ERROR -ne 0 ] ; then                                 # if the makedumpfile invocation failed
    log_failure_msg "$NAME: makedumpfile failed, falling back to cp"
    logger -t $NAME "makedumpfile failed, falling back to cp"
    KDUMP_CORETEMP="$KDUMP_STAMPDIR/vmcore-incomplete"
    KDUMP_COREFILE="$KDUMP_STAMPDIR/vmcore.$KDUMP_STAMP"
    cp $vmcore_file $KDUMP_CORETEMP                        # then fall back to copying the raw vmcore ELF file
    ERROR=$?
fi
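The fallback above is easy to model. Here is a sketch in Python, for illustration only (the real logic lives in the kdump-config shell script; the function names below are made up):

```python
def save_core(vmcore, run_makedumpfile, copy):
    """Mirror the savecore fallback: try makedumpfile first, and fall back
    to a plain copy of /proc/vmcore when it returns a nonzero status."""
    if run_makedumpfile() != 0:
        # makedumpfile failed (e.g. because of the bogus -xxxxx option),
        # so copy the raw ELF vmcore instead
        copy(vmcore, "vmcore-incomplete")
        return "cp"
    return "makedumpfile"
```

Injecting a failing makedumpfile stand-in is exactly what the bogus -xxxxx option achieves on a real system.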

How it works

kexec implements loading of the crash kernel. The core consists of two parts:

  • kexec_file_load/kexec_load is responsible for loading the standby kernel and initrd into memory ahead of time.

  • __crash_kexec is responsible for jumping to the standby kernel when a crash occurs.

kdump is mainly responsible for copying the vmcore file from memory to disk, with some slimming along the way.

This article will not dissect the kexec loading and address-switching flow in detail, nor kdump's copy-and-trim logic; we only focus on two key files, /proc/kcore and /proc/vmcore. Specifically:

  • /proc/kcore is the normal kernel presenting its own memory as an ELF core file. You can use gdb on it to debug the live system online, though debugging yourself from within yourself comes with some restrictions.

  • /proc/vmcore is the crash kernel presenting the normal kernel's memory as an ELF core file. Since the normal kernel has stopped running by then, it can be debugged without restrictions. The dump file kdump finally produces is just /proc/vmcore copied from memory to disk, possibly with some trimming and compression applied.

So /proc/kcore and /proc/vmcore are the heart of the whole mechanism, and we will focus on how these two are implemented.

ELF core file format

As for the ELF file format, we are familiar with three of its types: .o files (ET_REL), executables (ET_EXEC), and .so files (ET_DYN). But its fourth type, the core file (ET_CORE), has always seemed mysterious, and almost magical: one gdb session and the crash scene is restored.


Below is the rough layout of an ELF core file:

Notice that an ELF core file only cares about runtime state, so it has only segment information and no section information. It mainly contains two types of segments:

  1. PT_LOAD. Each such segment records one region of memory, along with that region's physical address, virtual address, and length.

  2. PT_NOTE. This is the segment type specific to ELF core files; it records the key information needed to interpret the memory regions. The PT_NOTE segment is divided into multiple elf_note records: an NT_PRSTATUS note records the CPU registers just before the crash, an NT_TASKSTRUCT note records the task_struct of the current process, and, most critically, a custom note of type 0 named VMCOREINFO records key facts about the kernel.

Most of an ELF core file's bulk is PT_LOAD segments recording memory contents, but the key to interpreting that memory lives in the PT_NOTE segment.
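To make the note layout concrete, here is a small Python sketch of building and walking a PT_NOTE payload. It is illustrative only: it follows the elf_note convention described above (three 32-bit words — namesz, descsz, type — followed by the name and descriptor, each padded to 4 bytes), and the sample data is made up:

```python
import struct

def parse_elf_notes(buf):
    """Walk a PT_NOTE payload made of elf_note records."""
    notes, off = [], 0
    while off + 12 <= len(buf):
        namesz, descsz, ntype = struct.unpack_from("<III", buf, off)
        off += 12
        if namesz == 0 and descsz == 0 and ntype == 0:
            break                          # all-zero note marks the end
        name = buf[off:off + namesz].rstrip(b"\0").decode()
        off += (namesz + 3) & ~3           # names are padded to 4 bytes
        desc = buf[off:off + descsz]
        off += (descsz + 3) & ~3           # descriptors are padded too
        notes.append((name, ntype, desc))
    return notes

def make_note(name, ntype, desc):
    """Build one elf_note record with proper 4-byte padding."""
    nb = name.encode() + b"\0"
    rec = struct.pack("<III", len(nb), len(desc), ntype) + nb
    rec += b"\0" * (-len(nb) % 4) + desc + b"\0" * (-len(desc) % 4)
    return rec

NT_PRSTATUS = 1
buf = (make_note("CORE", NT_PRSTATUS, b"\x11" * 0x150) +
       make_note("VMCOREINFO", 0, b"OSRELEASE=5.8.18+\n"))
```

The 0x150-byte prstatus descriptor matches the "Data size 0x00000150" that readelf -n reports for each NT_PRSTATUS note in the examples below.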


Let's look at a concrete vmcore file as an example:

  1. First, inspect the ELF header information:

$ sudo readelf -e vmcore.202107011132
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              CORE (Core file)    // the file type is ET_CORE
  Machine:                           Advanced Micro Devices X86-64
  Version:                           0x1
  Entry point address:               0x0
  Start of program headers:          64 (bytes into file)
  Start of section headers:          0 (bytes into file)
  Flags:                             0x0
  Size of this header:               64 (bytes)
  Size of program headers:           56 (bytes)
  Number of program headers:         6
  Size of section headers:           0 (bytes)
  Number of section headers:         0
  Section header string table index: 0

There are no sections in this file.

// the file contains two kinds of segments: PT_NOTE and PT_LOAD
Program Headers:
  Type  Offset             VirtAddr           PhysAddr           FileSiz            MemSiz             Flags  Align
  NOTE  0x0000000000001000 0x0000000000000000 0x0000000000000000 0x0000000000001318 0x0000000000001318        0x0
  LOAD  0x0000000000003000 0xffffffffb7200000 0x0000000006c00000 0x000000000202c000 0x000000000202c000 RWE    0x0
  LOAD  0x000000000202f000 0xffff903a00001000 0x0000000000001000 0x000000000009d800 0x000000000009d800 RWE    0x0
  LOAD  0x00000000020cd000 0xffff903a00100000 0x0000000000100000 0x0000000072f00000 0x0000000072f00000 RWE    0x0
  LOAD  0x0000000074fcd000 0xffff903a7f000000 0x000000007f000000 0x0000000000ee0000 0x0000000000ee0000 RWE    0x0
  LOAD  0x0000000075ead000 0xffff903a7ff00000 0x000000007ff00000 0x0000000000100000 0x0000000000100000 RWE    0x0
  2. We can further inspect the contents stored in the PT_NOTE segment:

$ sudo readelf -n vmcore.202107011132

Displaying notes found at file offset 0x00001000 with length 0x00001318:
  Owner        Data size   Description
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)   // the system has 8 CPUs, so 8 copies of prstatus are saved
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  VMCOREINFO   0x000007dd  Unknown note type: (0x00000000)    // the custom VMCOREINFO note
   description data: 4f5352454c454153453d352e382e31382b0a5041474553495a453d343039360a53594d424f4c28696e69745f7574735f6e73293d6666666666666666...
  3. We can decode the information stored in VMCOREINFO further; the description data is a hex byte stream, and converting it yields:

OSRELEASE=5.8.0-43-generic
PAGESIZE=4096
SYMBOL(init_uts_ns)=ffffffffa5014620
SYMBOL(node_online_map)=ffffffffa5276720
SYMBOL(swapper_pg_dir)=ffffffffa500a000
SYMBOL(_stext)=ffffffffa3a00000
SYMBOL(vmap_area_list)=ffffffffa50f2560
SYMBOL(mem_section)=ffff91673ffd2000
LENGTH(mem_section)=2048
SIZE(mem_section)=16
OFFSET(mem_section.section_mem_map)=0
SIZE(page)=64
SIZE(pglist_data)=171968
SIZE(zone)=1472
SIZE(free_area)=88
SIZE(list_head)=16
SIZE(nodemask_t)=128
OFFSET(page.flags)=0
OFFSET(page._refcount)=52
OFFSET(page.mapping)=24
OFFSET(page.lru)=8
OFFSET(page._mapcount)=48
OFFSET(page.private)=40
OFFSET(page.compound_dtor)=16
OFFSET(page.compound_order)=17
OFFSET(page.compound_head)=8
OFFSET(pglist_data.node_zones)=0
OFFSET(pglist_data.nr_zones)=171232
OFFSET(pglist_data.node_start_pfn)=171240
OFFSET(pglist_data.node_spanned_pages)=171256
OFFSET(pglist_data.node_id)=171264
OFFSET(zone.free_area)=192
OFFSET(zone.vm_stat)=1280
OFFSET(zone.spanned_pages)=120
OFFSET(free_area.free_list)=0
OFFSET(list_head.next)=0
OFFSET(list_head.prev)=8
OFFSET(vmap_area.va_start)=0
OFFSET(vmap_area.list)=40
LENGTH(zone.free_area)=11
SYMBOL(log_buf)=ffffffffa506a6e0
SYMBOL(log_buf_len)=ffffffffa506a6dc
SYMBOL(log_first_idx)=ffffffffa55f55d8
SYMBOL(clear_idx)=ffffffffa55f55a4
SYMBOL(log_next_idx)=ffffffffa55f55c8
SIZE(printk_log)=16
OFFSET(printk_log.ts_nsec)=0
OFFSET(printk_log.len)=8
OFFSET(printk_log.text_len)=10
OFFSET(printk_log.dict_len)=12
LENGTH(free_area.free_list)=5
NUMBER(NR_FREE_PAGES)=0
NUMBER(PG_lru)=4
NUMBER(PG_private)=13
NUMBER(PG_swapcache)=10
NUMBER(PG_swapbacked)=19
NUMBER(PG_slab)=9
NUMBER(PG_hwpoison)=23
NUMBER(PG_head_mask)=65536
NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
NUMBER(HUGETLB_PAGE_DTOR)=2
NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
NUMBER(phys_base)=1073741824
SYMBOL(init_top_pgt)=ffffffffa500a000
NUMBER(pgtable_l5_enabled)=0
SYMBOL(node_data)=ffffffffa5271da0
LENGTH(node_data)=1024
KERNELOFFSET=22a00000
NUMBER(KERNEL_IMAGE_SIZE)=1073741824
NUMBER(sme_mask)=0
CRASHTIME=1623937823
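The conversion itself is trivial: the description data readelf prints is just the raw note bytes shown as hex, and decoding them as ASCII recovers the key=value text. A Python sketch, using the first bytes of the hex stream from the 5.8.18+ dump shown earlier:

```python
# Hex description data as printed by readelf -n (first bytes of the stream);
# decoding it recovers the VMCOREINFO key=value lines.
hex_dump = "4f5352454c454153453d352e382e31382b0a5041474553495a453d343039360a"
text = bytes.fromhex(hex_dump).decode("ascii")

# The lines can then be split into a dictionary of VMCOREINFO entries.
info = dict(line.split("=", 1) for line in text.splitlines())
```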

/proc/kcore

When cleaning up disk space, people often stumble over the /proc/kcore file, because the size it reports is enormous, sometimes as large as 128T. In reality it occupies no disk space at all: it is a file in an in-memory filesystem. Nor does it occupy much memory: apart from a few control headers, the large regions are simulated, and the corresponding memory is only read when a user actually reads from the file.

The previous section introduced /proc/kcore as the current system's memory presented as an ELF core file, which gdb can use to debug the live system online. In this section we look at how that simulation actually works.

Preparing the data

Initialization is the process of building the kclist_head linked list; each member of the list corresponds to one PT_LOAD segment. When the file is read, these members are then presented as ELF PT_LOAD segments.

static int __init proc_kcore_init(void)
{
	/* (1) Create the /proc/kcore file */
	proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &kcore_proc_ops);
	if (!proc_root_kcore) {
		pr_err("couldn't create /proc/kcore\n");
		return 0;	/* Always returns 0. */
	}
	/* Store text area if it's special */
	/* (2) Add the kernel text segment _text to the kclist_head list;
	 *     each member of kclist_head corresponds to one PT_LOAD segment */
	proc_kcore_text_init();
	/* Store vmalloc area */
	/* (3) Add the VMALLOC memory region to kclist_head */
	kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
		   VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
	/* (4) Add the MODULES_VADDR module memory region to kclist_head */
	add_modules_range();
	/* Store direct-map area from physical memory map */
	/* (5) Walk the system memory layout and add valid memory to kclist_head */
	kcore_update_ram();
	register_hotmemory_notifier(&kcore_callback_nb);
	return 0;
}

static int kcore_update_ram(void)
{
	LIST_HEAD(list);
	LIST_HEAD(garbage);
	int nphdr;
	size_t phdrs_len, notes_len, data_offset;
	struct kcore_list *tmp, *pos;
	int ret = 0;

	down_write(&kclist_lock);
	if (!xchg(&kcore_need_update, 0))
		goto out;

	/* (5.1) Walk the system memory layout and add memory matching
	 *       `IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY` to the local list */
	ret = kcore_ram_list(&list);
	if (ret) {
		/* Couldn't get the RAM list, try again next time. */
		WRITE_ONCE(kcore_need_update, 1);
		list_splice_tail(&list, &garbage);
		goto out;
	}

	/* (5.2) Drop the existing KCORE_RAM/KCORE_VMEMMAP entries from
	 *       kclist_head, since the new list already covers them */
	list_for_each_entry_safe(pos, tmp, &kclist_head, list) {
		if (pos->type == KCORE_RAM || pos->type == KCORE_VMEMMAP)
			list_move(&pos->list, &garbage);
	}
	/* (5.3) Splice the new list onto the existing kclist_head list */
	list_splice_tail(&list, &kclist_head);

	/* (5.4) Update the member count of kclist_head (one member = one
	 *       PT_LOAD segment), compute the PT_NOTE segment length, and
	 *       compute the length of the /proc/kcore file. That length is
	 *       virtual: at most the maximum range of virtual addresses. */
	proc_root_kcore->size = get_kcore_size(&nphdr, &phdrs_len, &notes_len,
					       &data_offset);

out:
	up_write(&kclist_lock);
	/* (5.5) Free the members removed above */
	list_for_each_entry_safe(pos, tmp, &garbage, list) {
		list_del(&pos->list);
		kfree(pos);
	}
	return ret;
}

One key step is walking the system memory layout table; the key code is as follows:

kcore_ram_list → walk_system_ram_range:

int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
			  void *arg, int (*func)(unsigned long, unsigned long, void *))
{
	resource_size_t start, end;
	unsigned long flags;
	struct resource res;
	unsigned long pfn, end_pfn;
	int ret = -EINVAL;

	start = (u64) start_pfn << PAGE_SHIFT;
	end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
	/* (5.1.1) Look up resource ranges matching
	 *         IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY in iomem_resource */
	flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
	while (start < end &&
	       !find_next_iomem_res(start, end, flags, IORES_DESC_NONE,
				    false, &res)) {
		pfn = PFN_UP(res.start);
		end_pfn = PFN_DOWN(res.end + 1);
		if (end_pfn > pfn)
			ret = (*func)(pfn, end_pfn - pfn, arg);
		if (ret)
			break;
		start = res.end + 1;
	}
	return ret;
}

It is effectively equivalent to this command:

$ sudo cat /proc/iomem | grep "System RAM"
00001000-0009e7ff : System RAM
00100000-7fedffff : System RAM
7ff00000-7fffffff : System RAM
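The filtering that walk_system_ram_range() performs can be sketched in a few lines of Python against /proc/iomem-style text. Illustrative only — the kernel walks the iomem_resource tree by flags, not by parsing text; the sample below is taken from the output above plus one made-up non-RAM line:

```python
def system_ram_ranges(iomem_text):
    """Collect (start, end) pairs of lines marked 'System RAM' — the same
    regions walk_system_ram_range() visits via
    IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY."""
    ranges = []
    for line in iomem_text.splitlines():
        res, _, name = line.partition(" : ")
        if name.strip() == "System RAM":
            start, end = (int(x, 16) for x in res.strip().split("-"))
            ranges.append((start, end))
    return ranges

sample = """\
00001000-0009e7ff : System RAM
000a0000-000bffff : PCI Bus 0000:00
00100000-7fedffff : System RAM
7ff00000-7fffffff : System RAM
"""
```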

Reading out the ELF core

With the data prepared, the rest happens when /proc/kcore is read: the content is presented in ELF core format.

static const struct proc_ops kcore_proc_ops = {
	.proc_read	= read_kcore,
	.proc_open	= open_kcore,
	.proc_release	= release_kcore,
	.proc_lseek	= default_llseek,
};
↓
static ssize_t
read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
{
	char *buf = file->private_data;
	size_t phdrs_offset, notes_offset, data_offset;
	size_t phdrs_len, notes_len;
	struct kcore_list *m;
	size_t tsz;
	int nphdr;
	unsigned long start;
	size_t orig_buflen = buflen;
	int ret = 0;

	down_read(&kclist_lock);

	/* (1) Get the number of PT_LOAD segments, the length of the PT_NOTE
	 *     segment, etc.; the dynamic construction of the elf core file
	 *     starts here */
	get_kcore_size(&nphdr, &phdrs_len, &notes_len, &data_offset);
	phdrs_offset = sizeof(struct elfhdr);
	notes_offset = phdrs_offset + phdrs_len;

	/* ELF file header. */
	/* (2) Build the ELF file header and copy it to the user read buffer */
	if (buflen && *fpos < sizeof(struct elfhdr)) {
		struct elfhdr ehdr = {
			.e_ident = {
				[EI_MAG0] = ELFMAG0,
				[EI_MAG1] = ELFMAG1,
				[EI_MAG2] = ELFMAG2,
				[EI_MAG3] = ELFMAG3,
				[EI_CLASS] = ELF_CLASS,
				[EI_DATA] = ELF_DATA,
				[EI_VERSION] = EV_CURRENT,
				[EI_OSABI] = ELF_OSABI,
			},
			.e_type = ET_CORE,
			.e_machine = ELF_ARCH,
			.e_version = EV_CURRENT,
			.e_phoff = sizeof(struct elfhdr),
			.e_flags = ELF_CORE_EFLAGS,
			.e_ehsize = sizeof(struct elfhdr),
			.e_phentsize = sizeof(struct elf_phdr),
			.e_phnum = nphdr,
		};

		tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos);
		if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) {
			ret = -EFAULT;
			goto out;
		}
		buffer += tsz;
		buflen -= tsz;
		*fpos += tsz;
	}

	/* ELF program headers. */
	/* (3) Build the ELF program headers and copy them to the user read buffer */
	if (buflen && *fpos < phdrs_offset + phdrs_len) {
		struct elf_phdr *phdrs, *phdr;

		phdrs = kzalloc(phdrs_len, GFP_KERNEL);
		if (!phdrs) {
			ret = -ENOMEM;
			goto out;
		}
		/* (3.1) The PT_NOTE segment needs no physical or virtual address */
		phdrs[0].p_type = PT_NOTE;
		phdrs[0].p_offset = notes_offset;
		phdrs[0].p_filesz = notes_len;

		phdr = &phdrs[1];
		/* (3.2) Compute each PT_LOAD segment's physical address,
		 *       virtual address, and length one by one */
		list_for_each_entry(m, &kclist_head, list) {
			phdr->p_type = PT_LOAD;
			phdr->p_flags = PF_R | PF_W | PF_X;
			phdr->p_offset = kc_vaddr_to_offset(m->addr) + data_offset;
			if (m->type == KCORE_REMAP)
				phdr->p_vaddr = (size_t)m->vaddr;
			else
				phdr->p_vaddr = (size_t)m->addr;
			if (m->type == KCORE_RAM || m->type == KCORE_REMAP)
				phdr->p_paddr = __pa(m->addr);
			else if (m->type == KCORE_TEXT)
				phdr->p_paddr = __pa_symbol(m->addr);
			else
				phdr->p_paddr = (elf_addr_t)-1;
			phdr->p_filesz = phdr->p_memsz = m->size;
			phdr->p_align = PAGE_SIZE;
			phdr++;
		}

		tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos);
		if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset,
				 tsz)) {
			kfree(phdrs);
			ret = -EFAULT;
			goto out;
		}
		kfree(phdrs);
		buffer += tsz;
		buflen -= tsz;
		*fpos += tsz;
	}

	/* ELF note segment. */
	/* (4) Build the PT_NOTE segment and copy it to the user read buffer */
	if (buflen && *fpos < notes_offset + notes_len) {
		struct elf_prstatus prstatus = {};
		struct elf_prpsinfo prpsinfo = {
			.pr_sname = 'R',
			.pr_fname = "vmlinux",
		};
		char *notes;
		size_t i = 0;

		strlcpy(prpsinfo.pr_psargs, saved_command_line,
			sizeof(prpsinfo.pr_psargs));

		notes = kzalloc(notes_len, GFP_KERNEL);
		if (!notes) {
			ret = -ENOMEM;
			goto out;
		}
		/* (4.1) Append NT_PRSTATUS */
		append_kcore_note(notes, &i, CORE_STR, NT_PRSTATUS, &prstatus,
				  sizeof(prstatus));
		/* (4.2) Append NT_PRPSINFO */
		append_kcore_note(notes, &i, CORE_STR, NT_PRPSINFO, &prpsinfo,
				  sizeof(prpsinfo));
		/* (4.3) Append NT_TASKSTRUCT */
		append_kcore_note(notes, &i, CORE_STR, NT_TASKSTRUCT, current,
				  arch_task_struct_size);
		/*
		 * vmcoreinfo_size is mostly constant after init time, but it
		 * can be changed by crash_save_vmcoreinfo(). Racing here with a
		 * panic on another CPU before the machine goes down is insanely
		 * unlikely, but it's better to not leave potential buffer
		 * overflows lying around, regardless.
		 */
		/* (4.4) Append VMCOREINFO */
		append_kcore_note(notes, &i, VMCOREINFO_NOTE_NAME, 0,
				  vmcoreinfo_data,
				  min(vmcoreinfo_size, notes_len - i));

		tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos);
		if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) {
			kfree(notes);
			ret = -EFAULT;
			goto out;
		}
		kfree(notes);
		buffer += tsz;
		buflen -= tsz;
		*fpos += tsz;
	}

	/*
	 * Check to see if our file offset matches with any of
	 * the addresses in the elf_phdr on our list.
	 */
	start = kc_offset_to_vaddr(*fpos - data_offset);
	if ((tsz = (PAGE_SIZE - (start & ~PAGE_MASK))) > buflen)
		tsz = buflen;

	m = NULL;
	/* (5) Build the PT_LOAD segment contents and copy them to the user
	 *     read buffer */
	while (buflen) {
		/*
		 * If this is the first iteration or the address is not within
		 * the previous entry, search for a matching entry.
		 */
		if (!m || start < m->addr || start >= m->addr + m->size) {
			list_for_each_entry(m, &kclist_head, list) {
				if (start >= m->addr &&
				    start < m->addr + m->size)
					break;
			}
		}

		if (&m->list == &kclist_head) {
			if (clear_user(buffer, tsz)) {
				ret = -EFAULT;
				goto out;
			}
			m = NULL;	/* skip the list anchor */
		} else if (!pfn_is_ram(__pa(start) >> PAGE_SHIFT)) {
			if (clear_user(buffer, tsz)) {
				ret = -EFAULT;
				goto out;
			}
		} else if (m->type == KCORE_VMALLOC) {
			vread(buf, (char *)start, tsz);
			/* we have to zero-fill user buffer even if no read */
			if (copy_to_user(buffer, buf, tsz)) {
				ret = -EFAULT;
				goto out;
			}
		} else if (m->type == KCORE_USER) {
			/* User page is handled prior to normal kernel page: */
			if (copy_to_user(buffer, (char *)start, tsz)) {
				ret = -EFAULT;
				goto out;
			}
		} else {
			if (kern_addr_valid(start)) {
				/*
				 * Using bounce buffer to bypass the
				 * hardened user copy kernel text checks.
				 */
				if (copy_from_kernel_nofault(buf, (void *)start,
							     tsz)) {
					if (clear_user(buffer, tsz)) {
						ret = -EFAULT;
						goto out;
					}
				} else {
					if (copy_to_user(buffer, buf, tsz)) {
						ret = -EFAULT;
						goto out;
					}
				}
			} else {
				if (clear_user(buffer, tsz)) {
					ret = -EFAULT;
					goto out;
				}
			}
		}
		buflen -= tsz;
		*fpos += tsz;
		buffer += tsz;
		start += tsz;
		tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
	}
out:
	up_read(&kclist_lock);
	if (ret)
		return ret;
	return orig_buflen - buflen;
}

/proc/vmcore

/proc/vmcore is the crash kernel presenting the normal kernel's memory as an ELF core file.

Its file format is similar to that of /proc/kcore from the previous section; the difference is that its data preparation is split into two parts:

  • The normal kernel is responsible for preparing the elf header ahead of time.

  • The crash kernel is responsible for wrapping the handed-over elf header into the /proc/vmcore file and saving it to disk.

Let's walk through the process in detail.

Preparing the elf header (runs in the normal kernel)

When the system crashes, its state is very unstable and time is tight, so in the normal kernel we prepare the elf header data for /proc/vmcore as early as possible — even though the normal kernel never exposes /proc/vmcore; only the crash kernel does.

When kexec-tools loads the crash kernel via the kexec_file_load system call, most of the data needed for /proc/vmcore's elf header is prepared along the way:

kexec_file_load → kimage_file_alloc_init → kimage_file_prepare_segments →
arch_kexec_kernel_image_load → image->fops->load → kexec_bzImage64_ops.load →
bzImage64_load → crash_load_segments → prepare_elf_headers →
crash_prepare_elf64_headers:

static int prepare_elf_headers(struct kimage *image, void **addr,
			       unsigned long *sz)
{
	struct crash_mem *cmem;
	int ret;

	/* (1) Walk the system memory layout to count the valid memory regions,
	 *     and allocate the cmem buffer accordingly */
	cmem = fill_up_crash_elf_data();
	if (!cmem)
		return -ENOMEM;

	/* (2) Walk the layout again and record the valid regions into cmem */
	ret = walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callback);
	if (ret)
		goto out;

	/* Exclude unwanted mem ranges */
	/* (3) Exclude memory regions that will not be used */
	ret = elf_header_exclude_ranges(cmem);
	if (ret)
		goto out;

	/* By default prepare 64bit headers */
	/* (4) Start building the elf header */
	ret = crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz);

out:
	vfree(cmem);
	return ret;
}

int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
				void **addr, unsigned long *sz)
{
	Elf64_Ehdr *ehdr;
	Elf64_Phdr *phdr;
	unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
	unsigned char *buf;
	unsigned int cpu, i;
	unsigned long long notes_addr;
	unsigned long mstart, mend;

	/* extra phdr for vmcoreinfo elf note */
	nr_phdr = nr_cpus + 1;
	nr_phdr += mem->nr_ranges;

	/*
	 * kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
	 * area (for example, ffffffff80000000 - ffffffffa0000000 on x86_64).
	 * I think this is required by tools like gdb. So same physical
	 * memory will be mapped in two elf headers. One will contain kernel
	 * text virtual addresses and other will have __va(physical) addresses.
	 */
	nr_phdr++;
	elf_sz = sizeof(Elf64_Ehdr) + nr_phdr * sizeof(Elf64_Phdr);
	elf_sz = ALIGN(elf_sz, ELF_CORE_HEADER_ALIGN);

	buf = vzalloc(elf_sz);
	if (!buf)
		return -ENOMEM;

	/* (4.1) Build the ELF file header */
	ehdr = (Elf64_Ehdr *)buf;
	phdr = (Elf64_Phdr *)(ehdr + 1);
	memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
	ehdr->e_ident[EI_CLASS] = ELFCLASS64;
	ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
	ehdr->e_ident[EI_VERSION] = EV_CURRENT;
	ehdr->e_ident[EI_OSABI] = ELF_OSABI;
	memset(ehdr->e_ident + EI_PAD, 0, EI_NIDENT - EI_PAD);
	ehdr->e_type = ET_CORE;
	ehdr->e_machine = ELF_ARCH;
	ehdr->e_version = EV_CURRENT;
	ehdr->e_phoff = sizeof(Elf64_Ehdr);
	ehdr->e_ehsize = sizeof(Elf64_Ehdr);
	ehdr->e_phentsize = sizeof(Elf64_Phdr);

	/* Prepare one phdr of type PT_NOTE for each present cpu */
	/* (4.2) Build the ELF program headers: one PT_NOTE segment per CPU.
	 *       The segment data lives in the per_cpu_ptr(crash_notes, cpu)
	 *       variable. Note that crash_notes holds no data yet — only its
	 *       physical address is recorded here; the data is actually
	 *       stored into it after a crash occurs. */
	for_each_present_cpu(cpu) {
		phdr->p_type = PT_NOTE;
		notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
		phdr->p_offset = phdr->p_paddr = notes_addr;
		phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
		(ehdr->e_phnum)++;
		phdr++;
	}

	/* Prepare one PT_NOTE header for vmcoreinfo */
	/* (4.3) Build a separate PT_NOTE segment for VMCOREINFO. Again, only
	 *       the physical address of vmcoreinfo_note is recorded here;
	 *       the actual data is filled in in several stages. */
	phdr->p_type = PT_NOTE;
	phdr->p_offset = phdr->p_paddr = paddr_vmcoreinfo_note();
	phdr->p_filesz = phdr->p_memsz = VMCOREINFO_NOTE_SIZE;
	(ehdr->e_phnum)++;
	phdr++;

	/* Prepare PT_LOAD type program header for kernel text region */
	/* (4.4) Build the PT_LOAD segment for the kernel text region */
	if (kernel_map) {
		phdr->p_type = PT_LOAD;
		phdr->p_flags = PF_R|PF_W|PF_X;
		phdr->p_vaddr = (unsigned long) _text;
		phdr->p_filesz = phdr->p_memsz = _end - _text;
		phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
		ehdr->e_phnum++;
		phdr++;
	}

	/* Go through all the ranges in mem->ranges and prepare phdr */
	/* (4.5) Walk cmem and create a PT_LOAD segment for each valid memory
	 *       region in the system */
	for (i = 0; i < mem->nr_ranges; i++) {
		mstart = mem->ranges[i].start;
		mend = mem->ranges[i].end;

		phdr->p_type = PT_LOAD;
		phdr->p_flags = PF_R|PF_W|PF_X;
		phdr->p_offset = mstart;

		phdr->p_paddr = mstart;
		phdr->p_vaddr = (unsigned long) __va(mstart);
		phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
		phdr->p_align = 0;
		ehdr->e_phnum++;
		phdr++;
		pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
			 phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
			 ehdr->e_phnum, phdr->p_offset);
	}

	*addr = buf;
	*sz = elf_sz;
	return 0;
}

1. Updating the crash_notes data

Only after a panic occurs are the actual CPU register contents saved into crash_notes. The update path is as follows:

__crash_kexec → machine_crash_shutdown → crash_save_cpu
ipi_cpu_crash_stop → crash_save_cpu:

void crash_save_cpu(struct pt_regs *regs, int cpu)
{
	struct elf_prstatus prstatus;
	u32 *buf;

	if ((cpu < 0) || (cpu >= nr_cpu_ids))
		return;

	/* Using ELF notes here is opportunistic.
	 * I need a well defined structure format
	 * for the data I pass, and I need tags
	 * on the data to indicate what information I have
	 * squirrelled away. ELF notes happen to provide
	 * all of that, so there is no need to invent something new.
	 */
	buf = (u32 *)per_cpu_ptr(crash_notes, cpu);
	if (!buf)
		return;
	/* (1) Zero it out */
	memset(&prstatus, 0, sizeof(prstatus));
	/* (2) Save the pid */
	prstatus.pr_pid = current->pid;
	/* (3) Save the registers */
	elf_core_copy_kernel_regs(&prstatus.pr_reg, regs);
	/* (4) Store into crash_notes in elf_note format */
	buf = append_elf_note(buf, KEXEC_CORE_NOTE_NAME, NT_PRSTATUS,
			      &prstatus, sizeof(prstatus));
	/* (5) Append an all-zero elf_note as the terminator */
	final_note(buf);
}

2. Updating the vmcoreinfo_note data

vmcoreinfo_note is updated in two parts:

  • 2.1 The first part prepares most of the data at system initialization time:

static int __init crash_save_vmcoreinfo_init(void)
{
	/* (1.1) Allocate the vmcoreinfo_data buffer */
	vmcoreinfo_data = (unsigned char *)get_zeroed_page(GFP_KERNEL);
	if (!vmcoreinfo_data) {
		pr_warn("Memory allocation for vmcoreinfo_data failed\n");
		return -ENOMEM;
	}

	/* (1.2) Allocate the vmcoreinfo_note buffer */
	vmcoreinfo_note = alloc_pages_exact(VMCOREINFO_NOTE_SIZE,
					    GFP_KERNEL | __GFP_ZERO);
	if (!vmcoreinfo_note) {
		free_page((unsigned long)vmcoreinfo_data);
		vmcoreinfo_data = NULL;
		pr_warn("Memory allocation for vmcoreinfo_note failed\n");
		return -ENOMEM;
	}

	/* (2.1) Record the system's key facts into vmcoreinfo_data as strings,
	 *       via the family of VMCOREINFO_xxx macros */
	VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
	VMCOREINFO_PAGESIZE(PAGE_SIZE);

	VMCOREINFO_SYMBOL(init_uts_ns);
	VMCOREINFO_SYMBOL(node_online_map);
#ifdef CONFIG_MMU
	VMCOREINFO_SYMBOL_ARRAY(swapper_pg_dir);
#endif
	VMCOREINFO_SYMBOL(_stext);
	VMCOREINFO_SYMBOL(vmap_area_list);

#ifndef CONFIG_NEED_MULTIPLE_NODES
	VMCOREINFO_SYMBOL(mem_map);
	VMCOREINFO_SYMBOL(contig_page_data);
#endif
#ifdef CONFIG_SPARSEMEM
	VMCOREINFO_SYMBOL_ARRAY(mem_section);
	VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
	VMCOREINFO_STRUCT_SIZE(mem_section);
	VMCOREINFO_OFFSET(mem_section, section_mem_map);
#endif
	VMCOREINFO_STRUCT_SIZE(page);
	VMCOREINFO_STRUCT_SIZE(pglist_data);
	VMCOREINFO_STRUCT_SIZE(zone);
	VMCOREINFO_STRUCT_SIZE(free_area);
	VMCOREINFO_STRUCT_SIZE(list_head);
	VMCOREINFO_SIZE(nodemask_t);
	VMCOREINFO_OFFSET(page, flags);
	VMCOREINFO_OFFSET(page, _refcount);
	VMCOREINFO_OFFSET(page, mapping);
	VMCOREINFO_OFFSET(page, lru);
	VMCOREINFO_OFFSET(page, _mapcount);
	VMCOREINFO_OFFSET(page, private);
	VMCOREINFO_OFFSET(page, compound_dtor);
	VMCOREINFO_OFFSET(page, compound_order);
	VMCOREINFO_OFFSET(page, compound_head);
	VMCOREINFO_OFFSET(pglist_data, node_zones);
	VMCOREINFO_OFFSET(pglist_data, nr_zones);
#ifdef CONFIG_FLAT_NODE_MEM_MAP
	VMCOREINFO_OFFSET(pglist_data, node_mem_map);
#endif
	VMCOREINFO_OFFSET(pglist_data, node_start_pfn);
	VMCOREINFO_OFFSET(pglist_data, node_spanned_pages);
	VMCOREINFO_OFFSET(pglist_data, node_id);
	VMCOREINFO_OFFSET(zone, free_area);
	VMCOREINFO_OFFSET(zone, vm_stat);
	VMCOREINFO_OFFSET(zone, spanned_pages);
	VMCOREINFO_OFFSET(free_area, free_list);
	VMCOREINFO_OFFSET(list_head, next);
	VMCOREINFO_OFFSET(list_head, prev);
	VMCOREINFO_OFFSET(vmap_area, va_start);
	VMCOREINFO_OFFSET(vmap_area, list);
	VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER);
	log_buf_vmcoreinfo_setup();
	VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES);
	VMCOREINFO_NUMBER(NR_FREE_PAGES);
	VMCOREINFO_NUMBER(PG_lru);
	VMCOREINFO_NUMBER(PG_private);
	VMCOREINFO_NUMBER(PG_swapcache);
	VMCOREINFO_NUMBER(PG_swapbacked);
	VMCOREINFO_NUMBER(PG_slab);
#ifdef CONFIG_MEMORY_FAILURE
	VMCOREINFO_NUMBER(PG_hwpoison);
#endif
	VMCOREINFO_NUMBER(PG_head_mask);
#define PAGE_BUDDY_MAPCOUNT_VALUE	(~PG_buddy)
	VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
#ifdef CONFIG_HUGETLB_PAGE
	VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);
#define PAGE_OFFLINE_MAPCOUNT_VALUE	(~PG_offline)
	VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE);
#endif

	/* (2.2) Add some architecture-specific vmcoreinfo */
	arch_crash_save_vmcoreinfo();
	/* (3) Save the data collected in vmcoreinfo_data into vmcoreinfo_note
	 *     in elf_note form */
	update_vmcoreinfo_note();

	return 0;
}
  • 2.2 The second part appends data after a panic occurs:

__crash_kexec → crash_save_vmcoreinfo:

void crash_save_vmcoreinfo(void)
{
	if (!vmcoreinfo_note)
		return;

	/* Use the safe copy to generate vmcoreinfo note if have */
	if (vmcoreinfo_data_safecopy)
		vmcoreinfo_data = vmcoreinfo_data_safecopy;

	/* (1) Append the "CRASHTIME=xxx" information */
	vmcoreinfo_append_str("CRASHTIME=%lld\n", ktime_get_real_seconds());
	update_vmcoreinfo_note();
}

vmcoreinfo corresponds to the data read out by readelf -n xxx:

$ readelf -n vmcore.202106170650

Displaying notes found at file offset 0x00001000 with length 0x00000ac8:
  Owner        Data size   Description
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  CORE         0x00000150  NT_PRSTATUS (prstatus structure)
  VMCOREINFO   0x000007e6  Unknown note type: (0x00000000)
   description data: 4f5352454c454153453d352e382e30...

// the description data decodes to ASCII:
OSRELEASE=5.8.0-43-generic
PAGESIZE=4096
SYMBOL(init_uts_ns)=ffffffffa5014620
SYMBOL(node_online_map)=ffffffffa5276720
SYMBOL(swapper_pg_dir)=ffffffffa500a000
SYMBOL(_stext)=ffffffffa3a00000
SYMBOL(vmap_area_list)=ffffffffa50f2560
SYMBOL(mem_section)=ffff91673ffd2000
LENGTH(mem_section)=2048
SIZE(mem_section)=16
OFFSET(mem_section.section_mem_map)=0
SIZE(page)=64
SIZE(pglist_data)=171968
SIZE(zone)=1472
SIZE(free_area)=88
...
CRASHTIME=1623937823

Preparing the cmdline (runs in the normal kernel)

How does the prepared elf header data get handed to the crash kernel? It is passed via the cmdline:

kexec_file_load → kimage_file_alloc_init → kimage_file_prepare_segments →
arch_kexec_kernel_image_load → image->fops->load → kexec_bzImage64_ops.load →
bzImage64_load → setup_cmdline:

static int setup_cmdline(struct kimage *image, struct boot_params *params,
			 unsigned long bootparams_load_addr,
			 unsigned long cmdline_offset, char *cmdline,
			 unsigned long cmdline_len)
{
	char *cmdline_ptr = ((char *)params) + cmdline_offset;
	unsigned long cmdline_ptr_phys, len = 0;
	uint32_t cmdline_low_32, cmdline_ext_32;

	/* (1) Append the "elfcorehdr=0x%lx " parameter to the crash kernel's
	 *     cmdline */
	if (image->type == KEXEC_TYPE_CRASH) {
		len = sprintf(cmdline_ptr,
			"elfcorehdr=0x%lx ", image->arch.elf_load_addr);
	}
	memcpy(cmdline_ptr + len, cmdline, cmdline_len);
	cmdline_len += len;

	cmdline_ptr[cmdline_len - 1] = '\0';

	pr_debug("Final command line is: %s\n", cmdline_ptr);
	cmdline_ptr_phys = bootparams_load_addr + cmdline_offset;
	cmdline_low_32 = cmdline_ptr_phys & 0xffffffffUL;
	cmdline_ext_32 = cmdline_ptr_phys >> 32;
	params->hdr.cmd_line_ptr = cmdline_low_32;
	if (cmdline_ext_32)
		params->ext_cmd_line_ptr = cmdline_ext_32;

	return 0;
}

Booting the crash kernel (runs in the normal kernel)

When the normal kernel panics, it jumps to the crash kernel:

die → crash_kexec → __crash_kexec → machine_kexec

Receiving the elf header (runs in the crash kernel)

In the crash kernel, the first step is to receive the elf header information of the vmcore file that the normal kernel passed in the cmdline:

static int __init setup_elfcorehdr(char *arg)
{
	char *end;

	if (!arg)
		return -EINVAL;
	elfcorehdr_addr = memparse(arg, &end);
	if (*end == '@') {
		elfcorehdr_size = elfcorehdr_addr;
		elfcorehdr_addr = memparse(end + 1, &end);
	}
	return end > arg ? 0 : -EINVAL;
}
early_param("elfcorehdr", setup_elfcorehdr);
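The parsing above hinges on the kernel's memparse(): a number with an optional K/M/G scaling suffix, and the elfcorehdr= parameter comes either as a plain address or as size@address. A minimal Python model for illustration (it only handles the hex/decimal and suffix cases exercised here, not every memparse corner case):

```python
import re

def memparse(s):
    """Minimal model of memparse(): a hex (0x...) or decimal number with an
    optional k/m/g suffix. Returns (value, remaining_string)."""
    m = re.match(r"(0[xX][0-9a-fA-F]+|\d+)([kKmMgG]?)", s)
    num = m.group(1)
    val = int(num, 16) if num.lower().startswith("0x") else int(num)
    shift = {"": 0, "k": 10, "m": 20, "g": 30}[m.group(2).lower()]
    return val << shift, s[m.end():]

def parse_elfcorehdr(arg):
    """elfcorehdr=<addr> or elfcorehdr=<size>@<addr>, mirroring
    setup_elfcorehdr(). Returns (addr, size-or-None)."""
    val, rest = memparse(arg)
    if rest.startswith("@"):
        size = val
        addr, _ = memparse(rest[1:])
        return addr, size
    return val, None
```

With the address from the earlier kdump-config output, parse_elfcorehdr("0x73000000") yields the elf header's physical load address.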

Parsing and consolidating the elf header (runs in the crash kernel)

It then reads the vmcore file's elf header information, and parses and consolidates it:

static int __init vmcore_init(void)
{
	int rc = 0;

	/* Allow architectures to allocate ELF header in 2nd kernel */
	rc = elfcorehdr_alloc(&elfcorehdr_addr, &elfcorehdr_size);
	if (rc)
		return rc;
	/*
	 * If elfcorehdr= has been passed in cmdline or created in 2nd kernel,
	 * then capture the dump.
	 */
	if (!(is_vmcore_usable()))
		return rc;
	/* (1) Parse the elf header information passed over from the normal kernel */
	rc = parse_crash_elf_headers();
	if (rc) {
		pr_warn("Kdump: vmcore not initialized\n");
		return rc;
	}
	elfcorehdr_free(elfcorehdr_addr);
	elfcorehdr_addr = ELFCORE_ADDR_ERR;

	/* (2) Create the /proc/vmcore file interface */
	proc_vmcore = proc_create("vmcore", S_IRUSR, NULL, &vmcore_proc_ops);
	if (proc_vmcore)
		proc_vmcore->size = vmcore_size;
	return 0;
}
fs_initcall(vmcore_init);

↓

parse_crash_elf_headers()

↓

static int __init parse_crash_elf64_headers(void)
{
	int rc = 0;
	Elf64_Ehdr ehdr;
	u64 addr;

	addr = elfcorehdr_addr;

	/* Read Elf header */
	/* (1.1) Read the elf header passed over from the normal kernel.
	 *       Note: this reads another system's memory, so the physical
	 *       address must first be mapped with ioremap_cache() before it
	 *       can be read; many later reads work the same way.
	 */
	rc = elfcorehdr_read((char *)&ehdr, sizeof(Elf64_Ehdr), &addr);
	if (rc < 0)
		return rc;

	/* Do some basic Verification. */
	/* (1.2) Sanity-check the elf header that was read, in case it has
	 *       been corrupted
	 */
	if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
		(ehdr.e_type != ET_CORE) ||
		!vmcore_elf64_check_arch(&ehdr) ||
		ehdr.e_ident[EI_CLASS] != ELFCLASS64 ||
		ehdr.e_ident[EI_VERSION] != EV_CURRENT ||
		ehdr.e_version != EV_CURRENT ||
		ehdr.e_ehsize != sizeof(Elf64_Ehdr) ||
		ehdr.e_phentsize != sizeof(Elf64_Phdr) ||
		ehdr.e_phnum == 0) {
		pr_warn("Warning: Core image elf header is not sane\n");
		return -EINVAL;
	}

	/* Read in all elf headers. */
	/* (1.3) Allocate buffers on the crash kernel side, ready to copy the
	 *       data over locally:
	 *       elfcorebuf   stores the elf header + elf program headers
	 *       elfnotes_buf stores the PT_NOTE segment
	 */
	elfcorebuf_sz_orig = sizeof(Elf64_Ehdr) +
				ehdr.e_phnum * sizeof(Elf64_Phdr);
	elfcorebuf_sz = elfcorebuf_sz_orig;
	elfcorebuf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
					      get_order(elfcorebuf_sz_orig));
	if (!elfcorebuf)
		return -ENOMEM;
	addr = elfcorehdr_addr;
	/* (1.4) Read the whole elf header + elf program headers into elfcorebuf */
	rc = elfcorehdr_read(elfcorebuf, elfcorebuf_sz_orig, &addr);
	if (rc < 0)
		goto fail;

	/* Merge all PT_NOTE headers into one. */
	/* (1.5) Consolidate the data: merge the multiple PT_NOTEs into one,
	 *       and copy the PT_NOTE data into elfnotes_buf
	 */
	rc = merge_note_headers_elf64(elfcorebuf, &elfcorebuf_sz,
				      &elfnotes_buf, &elfnotes_sz);
	if (rc)
		goto fail;
	/* (1.6) Adjust each PT_LOAD segment header so that every segment is
	 *       page aligned
	 */
	rc = process_ptload_program_headers_elf64(elfcorebuf, elfcorebuf_sz,
						  elfnotes_sz, &vmcore_list);
	if (rc)
		goto fail;
	/* (1.7) Following the page-alignment adjustment above, recompute the
	 *       offsets stored in the vmcore_list list
	 */
	set_vmcore_list_offsets(elfcorebuf_sz, elfnotes_sz, &vmcore_list);
	return 0;
fail:
	free_elfcorebuf();
	return rc;
}

↓

static int __init merge_note_headers_elf64(char *elfptr, size_t *elfsz,
					   char **notes_buf, size_t *notes_sz)
{
	int i, nr_ptnote = 0, rc = 0;
	char *tmp;
	Elf64_Ehdr *ehdr_ptr;
	Elf64_Phdr phdr;
	u64 phdr_sz = 0, note_off;

	ehdr_ptr = (Elf64_Ehdr *)elfptr;

	/* (1.5.1) Update each individual PT_NOTE's length, dropping the
	 *         trailing all-zero elf_note
	 */
	rc = update_note_header_size_elf64(ehdr_ptr);
	if (rc < 0)
		return rc;

	/* (1.5.2) Compute the total length of all the PT_NOTE data combined */
	rc = get_note_number_and_size_elf64(ehdr_ptr, &nr_ptnote, &phdr_sz);
	if (rc < 0)
		return rc;

	*notes_sz = roundup(phdr_sz, PAGE_SIZE);
	*notes_buf = vmcore_alloc_buf(*notes_sz);
	if (!*notes_buf)
		return -ENOMEM;

	/* (1.5.3) Copy all the PT_NOTE data together into notes_buf */
	rc = copy_notes_elf64(ehdr_ptr, *notes_buf);
	if (rc < 0)
		return rc;

	/* Prepare merged PT_NOTE program header. */
	/* (1.5.4) Build a new PT_NOTE program header that addresses notes_buf */
	phdr.p_type   = PT_NOTE;
	phdr.p_flags  = 0;
	note_off = sizeof(Elf64_Ehdr) +
			(ehdr_ptr->e_phnum - nr_ptnote + 1) * sizeof(Elf64_Phdr);
	phdr.p_offset = roundup(note_off, PAGE_SIZE);
	phdr.p_vaddr  = phdr.p_paddr = 0;
	phdr.p_filesz = phdr.p_memsz = phdr_sz;
	phdr.p_align  = 0;

	/* Add merged PT_NOTE program header */
	/* (1.5.5) Copy in the new PT_NOTE program header */
	tmp = elfptr + sizeof(Elf64_Ehdr);
	memcpy(tmp, &phdr, sizeof(phdr));
	tmp += sizeof(phdr);

	/* Remove unwanted PT_NOTE program headers. */
	/* (1.5.6) Strip the PT_NOTE program headers that are now unused */
	i = (nr_ptnote - 1) * sizeof(Elf64_Phdr);
	*elfsz = *elfsz - i;
	memmove(tmp, tmp + i, ((*elfsz) - sizeof(Elf64_Ehdr) - sizeof(Elf64_Phdr)));
	memset(elfptr + *elfsz, 0, i);
	*elfsz = roundup(*elfsz, PAGE_SIZE);

	/* Modify e_phnum to reflect merged headers. */
	ehdr_ptr->e_phnum = ehdr_ptr->e_phnum - nr_ptnote + 1;

	/* Store the size of all notes. We need this to update the note
	 * header when the device dumps will be added.
	 */
	elfnotes_orig_sz = phdr.p_memsz;

	return 0;
}
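The sanity checks in step (1.2) only look at fixed fields of the Elf64_Ehdr. A minimal Python sketch (not from the original article; hypothetical helper, x86-64 little-endian only) that performs the same checks on a raw 64-byte header:

```python
import struct

ELFMAG, ELFCLASS64, EV_CURRENT, ET_CORE, EM_X86_64 = b"\x7fELF", 2, 1, 4, 62
EHDR_SIZE, PHDR_SIZE = 64, 56  # sizeof(Elf64_Ehdr), sizeof(Elf64_Phdr)

def elf64_core_header_is_sane(buf):
    """Mirror the checks of parse_crash_elf64_headers() step (1.2)."""
    if len(buf) < EHDR_SIZE or buf[:4] != ELFMAG:
        return False
    ei_class, ei_version = buf[4], buf[6]
    # e_type .. e_phnum live right after the 16-byte e_ident array
    (e_type, e_machine, e_version, _entry, _phoff, _shoff, _flags,
     e_ehsize, e_phentsize, e_phnum) = struct.unpack_from("<HHIQQQIHHH", buf, 16)
    return (e_type == ET_CORE and e_machine == EM_X86_64 and
            ei_class == ELFCLASS64 and ei_version == EV_CURRENT and
            e_version == EV_CURRENT and e_ehsize == EHDR_SIZE and
            e_phentsize == PHDR_SIZE and e_phnum > 0)

# Build a minimal valid core header for demonstration.
ident = ELFMAG + bytes([ELFCLASS64, 1, EV_CURRENT]) + bytes(9)
hdr = ident + struct.pack("<HHIQQQIHHH", ET_CORE, EM_X86_64, EV_CURRENT,
                          0, EHDR_SIZE, 0, 0, EHDR_SIZE, PHDR_SIZE, 2) \
            + bytes(6)  # e_shentsize, e_shnum, e_shstrndx
print(elf64_core_header_is_sane(hdr))  # True
```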

Reading the elf core (runs in the crash kernel)

After the parsing in the previous section, the elf header data is essentially ready: elfcorebuf stores the elf header + elf program headers, and elfnotes_buf stores the PT_NOTE segment.

The elf core data can now be read through read operations on the /proc/vmcore file:

static const struct proc_ops vmcore_proc_ops = {
	.proc_read	= read_vmcore,
	.proc_lseek	= default_llseek,
	.proc_mmap	= mmap_vmcore,
};

↓

read_vmcore()

↓

static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
			     int userbuf)
{
	ssize_t acc = 0, tmp;
	size_t tsz;
	u64 start;
	struct vmcore *m = NULL;

	if (buflen == 0 || *fpos >= vmcore_size)
		return 0;

	/* trim buflen to not go beyond EOF */
	if (buflen > vmcore_size - *fpos)
		buflen = vmcore_size - *fpos;

	/* Read ELF core header */
	/* (1) Read the elf header + elf program headers from elfcorebuf and
	 *     copy them into the userspace read buffer
	 */
	if (*fpos < elfcorebuf_sz) {
		tsz = min(elfcorebuf_sz - (size_t)*fpos, buflen);
		if (copy_to(buffer, elfcorebuf + *fpos, tsz, userbuf))
			return -EFAULT;
		buflen -= tsz;
		*fpos += tsz;
		buffer += tsz;
		acc += tsz;

		/* leave now if filled buffer already */
		if (buflen == 0)
			return acc;
	}

	/* Read Elf note segment */
	/* (2) Read the PT_NOTE segment from elfnotes_buf and copy it into the
	 *     userspace read buffer
	 */
	if (*fpos < elfcorebuf_sz + elfnotes_sz) {
		void *kaddr;

		/* We add device dumps before other elf notes because the
		 * other elf notes may not fill the elf notes buffer
		 * completely and we will end up with zero-filled data
		 * between the elf notes and the device dumps. Tools will
		 * then try to decode this zero-filled data as valid notes
		 * and we don't want that. Hence, adding device dumps before
		 * the other elf notes ensure that zero-filled data can be
		 * avoided.
		 */
#ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
		/* Read device dumps */
		if (*fpos < elfcorebuf_sz + vmcoredd_orig_sz) {
			tsz = min(elfcorebuf_sz + vmcoredd_orig_sz -
				  (size_t)*fpos, buflen);
			start = *fpos - elfcorebuf_sz;
			if (vmcoredd_copy_dumps(buffer, start, tsz, userbuf))
				return -EFAULT;

			buflen -= tsz;
			*fpos += tsz;
			buffer += tsz;
			acc += tsz;

			/* leave now if filled buffer already */
			if (!buflen)
				return acc;
		}
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */

		/* Read remaining elf notes */
		tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos, buflen);
		kaddr = elfnotes_buf + *fpos - elfcorebuf_sz - vmcoredd_orig_sz;
		if (copy_to(buffer, kaddr, tsz, userbuf))
			return -EFAULT;

		buflen -= tsz;
		*fpos += tsz;
		buffer += tsz;
		acc += tsz;

		/* leave now if filled buffer already */
		if (buflen == 0)
			return acc;
	}

	/* (3) Read the PT_LOAD segments via the vmcore_list list and copy them
	 *     into the userspace read buffer. The physical addresses must
	 *     first be mapped with ioremap_cache() before they can be read.
	 */
	list_for_each_entry(m, &vmcore_list, list) {
		if (*fpos < m->offset + m->size) {
			tsz = (size_t)min_t(unsigned long long,
					    m->offset + m->size - *fpos,
					    buflen);
			start = m->paddr + *fpos - m->offset;
			tmp = read_from_oldmem(buffer, tsz, &start,
					       userbuf, mem_encrypt_active());
			if (tmp < 0)
				return tmp;
			buflen -= tsz;
			*fpos += tsz;
			buffer += tsz;
			acc += tsz;

			/* leave now if filled buffer already */
			if (buflen == 0)
				return acc;
		}
	}

	return acc;
}
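The read path above stitches three regions into one flat logical file: elfcorebuf, then elfnotes_buf, then the PT_LOAD areas listed in vmcore_list. A minimal Python sketch (not from the original article; hypothetical names, with in-memory byte strings standing in for the old kernel's physical memory) of the same offset arithmetic:

```python
def read_vmcore(fpos, buflen, elfcorebuf, elfnotes_buf, load_segments):
    """Stitch three regions into one flat read, like __read_vmcore().
    load_segments is a list of (offset, data) pairs sorted by offset,
    mirroring the vmcore_list entries."""
    out = bytearray()
    # (1) elf header + program headers
    if fpos < len(elfcorebuf) and buflen:
        tsz = min(len(elfcorebuf) - fpos, buflen)
        out += elfcorebuf[fpos:fpos + tsz]
        fpos += tsz; buflen -= tsz
    # (2) merged PT_NOTE segment
    notes_end = len(elfcorebuf) + len(elfnotes_buf)
    if fpos < notes_end and buflen:
        tsz = min(notes_end - fpos, buflen)
        off = fpos - len(elfcorebuf)
        out += elfnotes_buf[off:off + tsz]
        fpos += tsz; buflen -= tsz
    # (3) PT_LOAD segments from the list
    for seg_off, data in load_segments:
        if buflen and fpos < seg_off + len(data):
            tsz = min(seg_off + len(data) - fpos, buflen)
            off = fpos - seg_off
            out += data[off:off + tsz]
            fpos += tsz; buflen -= tsz
    return bytes(out)

hdr, notes = b"HDR0", b"NOTE"
segs = [(8, b"LOADDATA")]
print(read_vmcore(0, 16, hdr, notes, segs))  # b'HDR0NOTELOADDATA'
print(read_vmcore(6, 6, hdr, notes, segs))   # b'TELOAD'
```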


