From ec3999e205ccadbeb8ab1f8420dea02fee2b5a5d Mon Sep 17 00:00:00 2001
From: Jan Beulich <jbeulich@suse.com>
Date: Tue, 24 Sep 2024 14:43:02 +0200
Subject: [PATCH 32/35] x86/HVM: properly reject "indirect" VRAM writes

While ->count will only be different from 1 for "indirect" (data in
guest memory) accesses, it being 1 does not exclude the request being an
"indirect" one. Check both to be on the safe side, and bring the ->count
part also in line with what ioreq_send_buffered() actually refuses to
handle.

Fixes: 3bbaaec09b1b ("x86/hvm: unify stdvga mmio intercept with standard mmio intercept")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: eb7cd0593d88c4b967a24bca8bd30591966676cd
master date: 2024-09-12 09:13:04 +0200
---
 xen/arch/x86/hvm/stdvga.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index b16c59f772..5f02d88615 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -530,14 +530,14 @@ static bool cf_check stdvga_mem_accept(
 
     spin_lock(&s->lock);
 
-    if ( p->dir == IOREQ_WRITE && p->count > 1 )
+    if ( p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1) )
     {
         /*
          * We cannot return X86EMUL_UNHANDLEABLE on anything other then the
          * first cycle of an I/O. So, since we cannot guarantee to always be
          * able to send buffered writes, we have to reject any multi-cycle
-         * I/O and, since we are rejecting an I/O, we must invalidate the
-         * cache.
+         * or "indirect" I/O and, since we are rejecting an I/O, we must
+         * invalidate the cache.
          * Single-cycle write transactions are accepted even if the cache is
          * not active since we can assert, when in stdvga mode, that writes
          * to VRAM have no side effect and thus we can try to buffer them.
--
2.46.1
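
For context, the essence of the predicate change can be shown standalone. The
structure and constant below are a reduced stand-in for the ioreq_t fields the
check looks at (dir, count, data_is_ptr), not Xen's actual definitions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IOREQ_WRITE 1          /* placeholder value for this sketch only */

/* Reduced stand-in for the ioreq_t fields used by stdvga_mem_accept(). */
struct ioreq_sketch {
    uint8_t  dir;              /* read or write */
    uint32_t count;            /* number of repetitions ("rep" accesses) */
    uint8_t  data_is_ptr;      /* data holds a guest address, not the value */
};

/* Old check: only multi-cycle writes are rejected, so a single-cycle
 * "indirect" write (count == 1, data_is_ptr set) slips through. */
static bool reject_old(const struct ioreq_sketch *p)
{
    return p->dir == IOREQ_WRITE && p->count > 1;
}

/* New check: also reject "indirect" writes, and use count != 1 to match
 * what buffered ioreqs can actually carry. */
static bool reject_new(const struct ioreq_sketch *p)
{
    return p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1);
}

int main(void)
{
    /* Single-cycle "indirect" write: exactly the case the fix catches. */
    struct ioreq_sketch req = { .dir = IOREQ_WRITE, .count = 1, .data_is_ptr = 1 };

    printf("old check rejects: %d, new check rejects: %d\n",
           reject_old(&req), reject_new(&req));
    return 0;
}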