
Commit 5bcb4c4

[MSAN] Support load and stores of scalable vector types
This adds support for scalable vector types - at least far enough to get basic load and store cases working. It turns out that loads and stores without origin tracking already worked; I apparently got that working with one of the preparatory patches to use TypeSize utilities and didn't notice. The code changes here are required to enable origin tracking.

For origin tracking, a 4-byte value - the origin - is broadcast into a shadow region whose size exactly matches the type being accessed. This origin is only written if the shadow value is non-zero. The details of how the shadow is computed from the original value being stored aren't relevant for this patch.

The code changes involve two related primitives.

First, we need to be able to perform that broadcast into a scalable-sized memory region. This requires the use of a loop and an appropriate bound. The fixed-size case optimizes with larger stores and alignment; I did not bother with that for the scalable case for now. We can optimize this codepath later if desired.

Second, we need a way to test whether the shadow is zero. The mechanism for this in the code is to convert the shadow value into a scalar and then zero-check that. There's an assumption that this scalar is zero exactly when all elements of the shadow value are zero. As a result, we use an OR reduction on the scalable vector. This is analogous to how e.g. an array is handled. I landed a bunch of cleanup changes to remove other direct uses of the scalar conversion to convince myself there were no other undocumented invariants.

Differential Revision: https://reviews.llvm.org/D146157
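To make the two primitives concrete, here is a minimal C++ sketch of the runtime semantics the emitted instrumentation implements. This is illustrative only: the names collapseShadow, paintOriginRegion, and onStore are invented for the example, and the real logic is generated IR inline at each store, not a runtime call.

#include <cstddef>
#include <cstdint>

// Origins are 4-byte values, matching kOriginSize in MemorySanitizer.cpp.
constexpr std::size_t kOriginSize = 4;

// OR-reduce a vector shadow to one scalar. The invariant the patch
// relies on: the result is zero exactly when every element is zero.
std::uint64_t collapseShadow(const std::uint64_t *Shadow, std::size_t N) {
  std::uint64_t Acc = 0;
  for (std::size_t I = 0; I < N; ++I)
    Acc |= Shadow[I];
  return Acc;
}

// Broadcast the 4-byte origin over a Size-byte shadow region, rounding
// Size up to a whole number of origin words. This mirrors the bound
// computed for the loop emitted in paintOrigin below.
void paintOriginRegion(std::uint32_t Origin, std::uint32_t *OriginPtr,
                       std::size_t Size) {
  std::size_t End = (Size + kOriginSize - 1) / kOriginSize;
  for (std::size_t I = 0; I < End; ++I)
    OriginPtr[I] = Origin;
}

// Store-side flow: the origin is painted only when the collapsed shadow
// of the stored value is nonzero, i.e. the value is (partly) poisoned.
void onStore(const std::uint64_t *Shadow, std::size_t N,
             std::uint32_t Origin, std::uint32_t *OriginPtr,
             std::size_t StoreSizeBytes) {
  if (collapseShadow(Shadow, N) != 0)
    paintOriginRegion(Origin, OriginPtr, StoreSizeBytes);
}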
Parent: 434b0ba

2 files changed: +528 −1 lines


llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp

Lines changed: 19 additions & 1 deletion
@@ -1183,13 +1183,29 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
   /// Fill memory range with the given origin value.
   void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                    TypeSize TS, Align Alignment) {
-    unsigned Size = TS.getFixedValue();
     const DataLayout &DL = F.getParent()->getDataLayout();
     const Align IntptrAlignment = DL.getABITypeAlign(MS.IntptrTy);
     unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
     assert(IntptrAlignment >= kMinOriginAlignment);
     assert(IntptrSize >= kOriginSize);
 
+    // Note: The loop based formation works for fixed length vectors too,
+    // however we prefer to unroll and specialize alignment below.
+    if (TS.isScalable()) {
+      Value *Size = IRB.CreateTypeSize(IRB.getInt32Ty(), TS);
+      Value *RoundUp = IRB.CreateAdd(Size, IRB.getInt32(kOriginSize - 1));
+      Value *End = IRB.CreateUDiv(RoundUp, IRB.getInt32(kOriginSize));
+      auto [InsertPt, Index] =
+          SplitBlockAndInsertSimpleForLoop(End, &*IRB.GetInsertPoint());
+      IRB.SetInsertPoint(InsertPt);
+
+      Value *GEP = IRB.CreateGEP(MS.OriginTy, OriginPtr, Index);
+      IRB.CreateAlignedStore(Origin, GEP, kMinOriginAlignment);
+      return;
+    }
+
+    unsigned Size = TS.getFixedValue();
+
     unsigned Ofs = 0;
     Align CurrentAlignment = Alignment;
     if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
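As a worked example of the bound computed above (assuming the hardware gives vscale = 2 at runtime, a value not stated in the patch): a store of <vscale x 4 x i32> covers 2 * 4 * 4 = 32 bytes of shadow, so End = (32 + 3) / 4 = 8, and the emitted loop writes the 4-byte origin into eight consecutive origin slots.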
@@ -1575,6 +1591,8 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
     if (ArrayType *Array = dyn_cast<ArrayType>(V->getType()))
       return collapseArrayShadow(Array, V, IRB);
     if (isa<VectorType>(V->getType())) {
+      if (isa<ScalableVectorType>(V->getType()))
+        return convertShadowToScalar(IRB.CreateOrReduce(V), IRB);
       unsigned BitWidth =
           V->getType()->getPrimitiveSizeInBits().getFixedValue();
       return IRB.CreateBitCast(V, IntegerType::get(*MS.C, BitWidth));
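As a quick sanity check on the invariant this relies on (a hypothetical example, not test data from the patch): OR-reducing the lane shadows {0, 0, 0, 0} gives 0, matching a fully initialized value, while {0, 0xFF, 0, 0} gives 0xFF, so any single poisoned lane makes the collapsed scalar nonzero and the zero check behaves the same as checking each lane individually.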
