From ce0a0cb0a74a909abf988f242aa228acdd2917fe Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Date: Wed, 26 Jun 2024 13:39:11 +0200
Subject: [PATCH 25/56] x86/irq: limit interrupt movement done by fixup_irqs()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The current check used in fixup_irqs() to decide whether to move
interrupts around is based on the affinity mask, but such a mask can
have all bits set, and hence is unlikely to be a subset of the input
mask.  For example, if an interrupt has an affinity mask of all 1s,
any input to fixup_irqs() that's not an all-set CPU mask would cause
that interrupt to be shuffled around unconditionally.

What fixup_irqs() cares about is evacuating interrupts from CPUs not
set in the input CPU mask, and for that purpose it should check
whether the interrupt is assigned to a CPU not present in that mask.
Assume that ->arch.cpu_mask is a subset of the ->affinity mask, and
keep the current logic that resets the ->affinity mask if the
interrupt has to be shuffled around.
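
As an illustration of the difference between the two checks, here is a
minimal standalone sketch.  Note this is not Xen code: plain uint64_t
bitmasks and a local subset() helper stand in for cpumask_t and
cpumask_subset(), and the CPU numbers are made up:

  /* Standalone illustration only; bitmasks stand in for cpumask_t. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool subset(uint64_t sub, uint64_t super)
  {
      return !(sub & ~super); /* mirrors cpumask_subset() semantics */
  }

  int main(void)
  {
      uint64_t affinity = ~0ULL;     /* ->affinity with all bits set */
      uint64_t cpu_mask = 1ULL << 2; /* ->arch.cpu_mask: IRQ lives on CPU2 */
      uint64_t input = ~(1ULL << 5); /* input mask: evacuate CPU5 only */

      /* Old check: an all-set ->affinity is never a subset of the input
       * mask, so the IRQ would be moved although CPU2 stays online. */
      printf("old check moves IRQ: %d\n", !subset(affinity, input));

      /* New check: CPU2 is covered by the input mask, no move needed. */
      printf("new check moves IRQ: %d\n", !subset(cpu_mask, input));

      return 0;
  }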

Doing the affinity movement based on ->arch.cpu_mask requires removing
the special handling of ->arch.cpu_mask done for high priority
vectors, as otherwise the adjustment done to cpu_mask would make them
always skip the CPU interrupt movement.
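
The following standalone sketch (again not Xen code, with made-up mask
values) shows why the special handling had to go: once cpu_mask has
been AND-ed with the input mask, the subset check can no longer fail,
so high priority vectors would never be moved:

  /* Standalone illustration only; made-up masks, not Xen's cpumask_t. */
  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
      uint64_t cpu_mask = 0xf0; /* hypothetical hi-priority vector target */
      uint64_t input = 0x0f;    /* CPUs remaining online */

      cpu_mask &= input; /* the removed special-case adjustment */

      /* The subset test now holds by construction, whatever the inputs
       * were, and the movement logic guarded by it would be skipped. */
      assert(!(cpu_mask & ~input));
      return 0;
  }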

While there, also adjust the comment as to the purpose of fixup_irqs().

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: c7564d7366d865cc407e3d64bca816d07edee174
master date: 2024-06-12 14:30:40 +0200
---
 xen/arch/x86/include/asm/irq.h |  2 +-
 xen/arch/x86/irq.c             | 21 +++++++++++----------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/include/asm/irq.h b/xen/arch/x86/include/asm/irq.h
index d7fb8ec7e8..71d4a8fc56 100644
--- a/xen/arch/x86/include/asm/irq.h
+++ b/xen/arch/x86/include/asm/irq.h
@@ -132,7 +132,7 @@ void free_domain_pirqs(struct domain *d);
 int map_domain_emuirq_pirq(struct domain *d, int pirq, int emuirq);
 int unmap_domain_pirq_emuirq(struct domain *d, int pirq);
 
-/* Reset irq affinities to match the given CPU mask. */
+/* Evacuate interrupts assigned to CPUs not present in the input CPU mask. */
 void fixup_irqs(const cpumask_t *mask, bool verbose);
 void fixup_eoi(void);
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index db14df93db..566331bec1 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2529,7 +2529,7 @@ static int __init cf_check setup_dump_irqs(void)
 }
 __initcall(setup_dump_irqs);
 
-/* Reset irq affinities to match the given CPU mask. */
+/* Evacuate interrupts assigned to CPUs not present in the input CPU mask. */
 void fixup_irqs(const cpumask_t *mask, bool verbose)
 {
     unsigned int irq;
@@ -2553,19 +2553,15 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
         vector = irq_to_vector(irq);
         if ( vector >= FIRST_HIPRIORITY_VECTOR &&
-             vector <= LAST_HIPRIORITY_VECTOR )
+             vector <= LAST_HIPRIORITY_VECTOR &&
+             desc->handler == &no_irq_type )
         {
-            cpumask_and(desc->arch.cpu_mask, desc->arch.cpu_mask, mask);
-
             /*
              * This can in particular happen when parking secondary threads
              * during boot and when the serial console wants to use a PCI IRQ.
              */
-            if ( desc->handler == &no_irq_type )
-            {
-                spin_unlock(&desc->lock);
-                continue;
-            }
+            spin_unlock(&desc->lock);
+            continue;
         }
 
         if ( desc->arch.move_cleanup_count )
@@ -2586,7 +2582,12 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                         affinity);
         }
 
-        if ( !desc->action || cpumask_subset(desc->affinity, mask) )
+        /*
+         * Avoid shuffling the interrupt around as long as current target CPUs
+         * are a subset of the input mask.  What fixup_irqs() cares about is
+         * evacuating interrupts from CPUs not in the input mask.
+         */
+        if ( !desc->action || cpumask_subset(desc->arch.cpu_mask, mask) )
         {
             spin_unlock(&desc->lock);
             continue;
--
2.45.2