
Manually lower select in fmin/maxnum to fix #26333 (#121716)

Open
wants to merge 1 commit into base: main

Conversation

andrewbriand commented Jan 6, 2025

Fixes #26333 by manually lowering the select in combineFMinNumFMaxNum for scalar f32.
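
The new lowering emits X86ISD::FSETCC (cmpunordss) plus FAND/FANDN/FOR, i.e. a branchless bitwise select of the form (op1 & mask) | (minmax & ~mask), where the mask is all-ones when the first operand is NaN. A minimal C++ sketch of that dataflow (illustrative only; the names are not from the patch, and the ternary is a simplified stand-in for maxss that ignores its NaN/signed-zero corner cases, which the mask path exists to handle):

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// Branchless select mirroring the SSE sequence in the updated tests:
//   cmpunordss -> mask, then andps / andnps / orps -> (y & mask) | (max & ~mask).
static float fmax_bitselect(float x, float y) {
  // Simplified stand-in for `maxss`; the real instruction returns its second
  // source operand whenever either operand is NaN.
  float mx = (x > y) ? x : y;

  // cmpunordss %xmm0, %xmm0: all-ones if x is NaN, all-zeros otherwise.
  uint32_t mask = std::isnan(x) ? 0xFFFFFFFFu : 0u;

  uint32_t yb, mb;
  std::memcpy(&yb, &y, sizeof(yb));
  std::memcpy(&mb, &mx, sizeof(mb));
  uint32_t rb = (yb & mask) | (mb & ~mask);  // andps, andnps, orps

  float r;
  std::memcpy(&r, &rb, sizeof(r));
  return r;  // y if x is NaN, otherwise max(x, y)
}
```

As the updated SSE CHECK lines show, doing the compare in place (cmpunordss %xmm0, %xmm0) lets the mask live in %xmm0 itself, which removes the extra movaps copies the generic select lowering produced.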

github-actions bot commented Jan 6, 2025

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository; in that case, you can instead tag reviewers by name in a comment by using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "pinging" the PR, i.e. adding a comment that says "Ping". The common courtesy ping rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

llvmbot (Member) commented Jan 6, 2025

@llvm/pr-subscribers-backend-x86

Author: Andrew Briand (andrewbriand)

Changes

Fixes https://bugs.llvm.org/show_bug.cgi?id=25959 by manually lowering the select in combineFMinNumFMaxNum for scalar f32.


Patch is 28.57 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/121716.diff

5 Files Affected:

  • (modified) llvm/lib/Target/X86/X86ISelLowering.cpp (+13-1)
  • (modified) llvm/test/CodeGen/X86/fmaxnum.ll (+12-16)
  • (modified) llvm/test/CodeGen/X86/fminnum.ll (+12-16)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-fmax.ll (+98-118)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-fmin.ll (+94-114)
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index 07b9a30b57564c..c811fc6dc07483 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -54500,10 +54500,22 @@ static SDValue combineFMinNumFMaxNum(SDNode *N, SelectionDAG &DAG,
 
   // If either operand is NaN, the 2nd source operand (Op0) is passed through.
   SDValue MinOrMax = DAG.getNode(MinMaxOp, DL, VT, Op1, Op0);
-  SDValue IsOp0Nan = DAG.getSetCC(DL, SetCCType, Op0, Op0, ISD::SETUO);
 
   // If Op0 is a NaN, select Op1. Otherwise, select the max. If both operands
   // are NaN, the NaN value of Op1 is the result.
+
+  // Manually lower f32 SSE Select to fix Bug 25959
+  if (!Subtarget.hasAVX() && VT == MVT::f32) {
+    unsigned SSECC = 3; // UNORD
+    SDValue IsOp0Nan = DAG.getNode(X86ISD::FSETCC, DL, VT, Op0, Op0,
+                                   DAG.getTargetConstant(SSECC, DL, MVT::i8));
+    SDValue AndN = DAG.getNode(X86ISD::FANDN, DL, VT, IsOp0Nan, MinOrMax);
+    SDValue And = DAG.getNode(X86ISD::FAND, DL, VT, Op1, IsOp0Nan);
+    return DAG.getNode(X86ISD::FOR, DL, VT, And, AndN);
+  }
+
+  SDValue IsOp0Nan = DAG.getSetCC(DL, SetCCType, Op0, Op0, ISD::SETUO);
+
   return DAG.getSelect(DL, VT, IsOp0Nan, Op1, MinOrMax);
 }
 
diff --git a/llvm/test/CodeGen/X86/fmaxnum.ll b/llvm/test/CodeGen/X86/fmaxnum.ll
index 2e1af1e84e0762..07d1e054b9b800 100644
--- a/llvm/test/CodeGen/X86/fmaxnum.ll
+++ b/llvm/test/CodeGen/X86/fmaxnum.ll
@@ -24,14 +24,12 @@ declare <8 x double> @llvm.maxnum.v8f64(<8 x double>, <8 x double>)
 define float @test_fmaxf(float %x, float %y) {
 ; SSE-LABEL: test_fmaxf:
 ; SSE:       # %bb.0:
-; SSE-NEXT:    movaps %xmm0, %xmm2
-; SSE-NEXT:    cmpunordss %xmm0, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm3
-; SSE-NEXT:    andps %xmm1, %xmm3
-; SSE-NEXT:    maxss %xmm0, %xmm1
-; SSE-NEXT:    andnps %xmm1, %xmm2
-; SSE-NEXT:    orps %xmm3, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm0
+; SSE-NEXT:    movaps %xmm1, %xmm2
+; SSE-NEXT:    maxss %xmm0, %xmm2
+; SSE-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE-NEXT:    andps %xmm0, %xmm1
+; SSE-NEXT:    andnps %xmm2, %xmm0
+; SSE-NEXT:    orps %xmm1, %xmm0
 ; SSE-NEXT:    retq
 ;
 ; AVX1-LABEL: test_fmaxf:
@@ -113,14 +111,12 @@ define x86_fp80 @test_fmaxl(x86_fp80 %x, x86_fp80 %y) {
 define float @test_intrinsic_fmaxf(float %x, float %y) {
 ; SSE-LABEL: test_intrinsic_fmaxf:
 ; SSE:       # %bb.0:
-; SSE-NEXT:    movaps %xmm0, %xmm2
-; SSE-NEXT:    cmpunordss %xmm0, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm3
-; SSE-NEXT:    andps %xmm1, %xmm3
-; SSE-NEXT:    maxss %xmm0, %xmm1
-; SSE-NEXT:    andnps %xmm1, %xmm2
-; SSE-NEXT:    orps %xmm3, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm0
+; SSE-NEXT:    movaps %xmm1, %xmm2
+; SSE-NEXT:    maxss %xmm0, %xmm2
+; SSE-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE-NEXT:    andps %xmm0, %xmm1
+; SSE-NEXT:    andnps %xmm2, %xmm0
+; SSE-NEXT:    orps %xmm1, %xmm0
 ; SSE-NEXT:    retq
 ;
 ; AVX1-LABEL: test_intrinsic_fmaxf:
diff --git a/llvm/test/CodeGen/X86/fminnum.ll b/llvm/test/CodeGen/X86/fminnum.ll
index 1290a7b8191067..6e66e5301179b2 100644
--- a/llvm/test/CodeGen/X86/fminnum.ll
+++ b/llvm/test/CodeGen/X86/fminnum.ll
@@ -24,14 +24,12 @@ declare <8 x double> @llvm.minnum.v8f64(<8 x double>, <8 x double>)
 define float @test_fminf(float %x, float %y) {
 ; SSE-LABEL: test_fminf:
 ; SSE:       # %bb.0:
-; SSE-NEXT:    movaps %xmm0, %xmm2
-; SSE-NEXT:    cmpunordss %xmm0, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm3
-; SSE-NEXT:    andps %xmm1, %xmm3
-; SSE-NEXT:    minss %xmm0, %xmm1
-; SSE-NEXT:    andnps %xmm1, %xmm2
-; SSE-NEXT:    orps %xmm3, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm0
+; SSE-NEXT:    movaps %xmm1, %xmm2
+; SSE-NEXT:    minss %xmm0, %xmm2
+; SSE-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE-NEXT:    andps %xmm0, %xmm1
+; SSE-NEXT:    andnps %xmm2, %xmm0
+; SSE-NEXT:    orps %xmm1, %xmm0
 ; SSE-NEXT:    retq
 ;
 ; AVX1-LABEL: test_fminf:
@@ -113,14 +111,12 @@ define x86_fp80 @test_fminl(x86_fp80 %x, x86_fp80 %y) {
 define float @test_intrinsic_fminf(float %x, float %y) {
 ; SSE-LABEL: test_intrinsic_fminf:
 ; SSE:       # %bb.0:
-; SSE-NEXT:    movaps %xmm0, %xmm2
-; SSE-NEXT:    cmpunordss %xmm0, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm3
-; SSE-NEXT:    andps %xmm1, %xmm3
-; SSE-NEXT:    minss %xmm0, %xmm1
-; SSE-NEXT:    andnps %xmm1, %xmm2
-; SSE-NEXT:    orps %xmm3, %xmm2
-; SSE-NEXT:    movaps %xmm2, %xmm0
+; SSE-NEXT:    movaps %xmm1, %xmm2
+; SSE-NEXT:    minss %xmm0, %xmm2
+; SSE-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE-NEXT:    andps %xmm0, %xmm1
+; SSE-NEXT:    andnps %xmm2, %xmm0
+; SSE-NEXT:    orps %xmm1, %xmm0
 ; SSE-NEXT:    retq
 ;
 ; AVX1-LABEL: test_intrinsic_fminf:
diff --git a/llvm/test/CodeGen/X86/vector-reduce-fmax.ll b/llvm/test/CodeGen/X86/vector-reduce-fmax.ll
index fe2c41f57cfab1..d77c1a82a807b8 100644
--- a/llvm/test/CodeGen/X86/vector-reduce-fmax.ll
+++ b/llvm/test/CodeGen/X86/vector-reduce-fmax.ll
@@ -21,29 +21,25 @@ define float @test_v1f32(<1 x float> %a0) {
 define float @test_v2f32(<2 x float> %a0) {
 ; SSE2-LABEL: test_v2f32:
 ; SSE2:       # %bb.0:
-; SSE2-NEXT:    movaps %xmm0, %xmm2
-; SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[1,1],xmm0[1,1]
 ; SSE2-NEXT:    movaps %xmm0, %xmm1
-; SSE2-NEXT:    cmpunordss %xmm0, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm3
-; SSE2-NEXT:    andps %xmm2, %xmm3
-; SSE2-NEXT:    maxss %xmm0, %xmm2
-; SSE2-NEXT:    andnps %xmm2, %xmm1
-; SSE2-NEXT:    orps %xmm3, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm0
+; SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[1,1]
+; SSE2-NEXT:    movaps %xmm1, %xmm2
+; SSE2-NEXT:    maxss %xmm0, %xmm1
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    andps %xmm0, %xmm2
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm2, %xmm0
 ; SSE2-NEXT:    retq
 ;
 ; SSE41-LABEL: test_v2f32:
 ; SSE41:       # %bb.0:
-; SSE41-NEXT:    movshdup {{.*#+}} xmm2 = xmm0[1,1,3,3]
-; SSE41-NEXT:    movaps %xmm0, %xmm1
-; SSE41-NEXT:    cmpunordss %xmm0, %xmm1
-; SSE41-NEXT:    movaps %xmm1, %xmm3
-; SSE41-NEXT:    andps %xmm2, %xmm3
-; SSE41-NEXT:    maxss %xmm0, %xmm2
-; SSE41-NEXT:    andnps %xmm2, %xmm1
-; SSE41-NEXT:    orps %xmm3, %xmm1
-; SSE41-NEXT:    movaps %xmm1, %xmm0
+; SSE41-NEXT:    movshdup {{.*#+}} xmm1 = xmm0[1,1,3,3]
+; SSE41-NEXT:    movaps %xmm1, %xmm2
+; SSE41-NEXT:    maxss %xmm0, %xmm1
+; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm2
+; SSE41-NEXT:    andnps %xmm1, %xmm0
+; SSE41-NEXT:    orps %xmm2, %xmm0
 ; SSE41-NEXT:    retq
 ;
 ; AVX-LABEL: test_v2f32:
@@ -73,8 +69,8 @@ define float @test_v3f32(<3 x float> %a0) {
 ; SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[1,1]
 ; SSE2-NEXT:    movaps %xmm0, %xmm2
 ; SSE2-NEXT:    cmpunordss %xmm0, %xmm2
-; SSE2-NEXT:    movaps %xmm2, %xmm3
-; SSE2-NEXT:    andps %xmm1, %xmm3
+; SSE2-NEXT:    movaps %xmm1, %xmm3
+; SSE2-NEXT:    andps %xmm2, %xmm3
 ; SSE2-NEXT:    maxss %xmm0, %xmm1
 ; SSE2-NEXT:    andnps %xmm1, %xmm2
 ; SSE2-NEXT:    orps %xmm3, %xmm2
@@ -92,8 +88,8 @@ define float @test_v3f32(<3 x float> %a0) {
 ; SSE41-NEXT:    movshdup {{.*#+}} xmm1 = xmm0[1,1,3,3]
 ; SSE41-NEXT:    movaps %xmm0, %xmm2
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm2
-; SSE41-NEXT:    movaps %xmm2, %xmm3
-; SSE41-NEXT:    andps %xmm1, %xmm3
+; SSE41-NEXT:    movaps %xmm1, %xmm3
+; SSE41-NEXT:    andps %xmm2, %xmm3
 ; SSE41-NEXT:    maxss %xmm0, %xmm1
 ; SSE41-NEXT:    andnps %xmm1, %xmm2
 ; SSE41-NEXT:    orps %xmm3, %xmm2
@@ -140,28 +136,26 @@ define float @test_v4f32(<4 x float> %a0) {
 ; SSE2-NEXT:    movaps %xmm0, %xmm2
 ; SSE2-NEXT:    movaps %xmm0, %xmm3
 ; SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm0[1,1]
+; SSE2-NEXT:    movaps %xmm3, %xmm4
+; SSE2-NEXT:    maxss %xmm0, %xmm3
+; SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,3],xmm0[3,3]
+; SSE2-NEXT:    unpckhpd {{.*#+}} xmm2 = xmm2[1],xmm0[1]
 ; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE2-NEXT:    movaps %xmm0, %xmm4
-; SSE2-NEXT:    andps %xmm3, %xmm4
-; SSE2-NEXT:    maxss %xmm1, %xmm3
-; SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,3,3,3]
-; SSE2-NEXT:    movhlps {{.*#+}} xmm2 = xmm2[1,1]
+; SSE2-NEXT:    andps %xmm0, %xmm4
 ; SSE2-NEXT:    andnps %xmm3, %xmm0
 ; SSE2-NEXT:    orps %xmm4, %xmm0
 ; SSE2-NEXT:    movaps %xmm2, %xmm3
 ; SSE2-NEXT:    maxss %xmm0, %xmm3
 ; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE2-NEXT:    movaps %xmm0, %xmm4
-; SSE2-NEXT:    andnps %xmm3, %xmm4
-; SSE2-NEXT:    andps %xmm2, %xmm0
-; SSE2-NEXT:    orps %xmm4, %xmm0
+; SSE2-NEXT:    andps %xmm0, %xmm2
+; SSE2-NEXT:    andnps %xmm3, %xmm0
+; SSE2-NEXT:    orps %xmm2, %xmm0
 ; SSE2-NEXT:    movaps %xmm1, %xmm2
 ; SSE2-NEXT:    maxss %xmm0, %xmm2
 ; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE2-NEXT:    movaps %xmm0, %xmm3
-; SSE2-NEXT:    andnps %xmm2, %xmm3
-; SSE2-NEXT:    andps %xmm1, %xmm0
-; SSE2-NEXT:    orps %xmm3, %xmm0
+; SSE2-NEXT:    andps %xmm0, %xmm1
+; SSE2-NEXT:    andnps %xmm2, %xmm0
+; SSE2-NEXT:    orps %xmm1, %xmm0
 ; SSE2-NEXT:    retq
 ;
 ; SSE41-LABEL: test_v4f32:
@@ -169,28 +163,26 @@ define float @test_v4f32(<4 x float> %a0) {
 ; SSE41-NEXT:    movaps %xmm0, %xmm1
 ; SSE41-NEXT:    movaps %xmm0, %xmm2
 ; SSE41-NEXT:    movshdup {{.*#+}} xmm3 = xmm0[1,1,3,3]
+; SSE41-NEXT:    movaps %xmm3, %xmm4
+; SSE41-NEXT:    maxss %xmm0, %xmm3
+; SSE41-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,3],xmm0[3,3]
+; SSE41-NEXT:    unpckhpd {{.*#+}} xmm2 = xmm2[1],xmm0[1]
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm4
-; SSE41-NEXT:    andps %xmm3, %xmm4
-; SSE41-NEXT:    maxss %xmm1, %xmm3
-; SSE41-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,3,3,3]
-; SSE41-NEXT:    movhlps {{.*#+}} xmm2 = xmm2[1,1]
+; SSE41-NEXT:    andps %xmm0, %xmm4
 ; SSE41-NEXT:    andnps %xmm3, %xmm0
 ; SSE41-NEXT:    orps %xmm4, %xmm0
 ; SSE41-NEXT:    movaps %xmm2, %xmm3
 ; SSE41-NEXT:    maxss %xmm0, %xmm3
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm4
-; SSE41-NEXT:    andnps %xmm3, %xmm4
-; SSE41-NEXT:    andps %xmm2, %xmm0
-; SSE41-NEXT:    orps %xmm4, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm2
+; SSE41-NEXT:    andnps %xmm3, %xmm0
+; SSE41-NEXT:    orps %xmm2, %xmm0
 ; SSE41-NEXT:    movaps %xmm1, %xmm2
 ; SSE41-NEXT:    maxss %xmm0, %xmm2
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm3
-; SSE41-NEXT:    andnps %xmm2, %xmm3
-; SSE41-NEXT:    andps %xmm1, %xmm0
-; SSE41-NEXT:    orps %xmm3, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm1
+; SSE41-NEXT:    andnps %xmm2, %xmm0
+; SSE41-NEXT:    orps %xmm1, %xmm0
 ; SSE41-NEXT:    retq
 ;
 ; AVX-LABEL: test_v4f32:
@@ -237,31 +229,27 @@ define float @test_v8f32(<8 x float> %a0) {
 ; SSE2-NEXT:    andps %xmm0, %xmm1
 ; SSE2-NEXT:    andnps %xmm2, %xmm0
 ; SSE2-NEXT:    orps %xmm1, %xmm0
-; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[1,1,1,1]
-; SSE2-NEXT:    movaps %xmm0, %xmm1
-; SSE2-NEXT:    cmpunordss %xmm0, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm3
-; SSE2-NEXT:    andps %xmm2, %xmm3
-; SSE2-NEXT:    maxss %xmm0, %xmm2
-; SSE2-NEXT:    andnps %xmm2, %xmm1
-; SSE2-NEXT:    orps %xmm3, %xmm1
-; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[2,3,2,3]
-; SSE2-NEXT:    movdqa %xmm2, %xmm3
-; SSE2-NEXT:    maxss %xmm1, %xmm3
-; SSE2-NEXT:    cmpunordss %xmm1, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm4
-; SSE2-NEXT:    andnps %xmm3, %xmm4
-; SSE2-NEXT:    andps %xmm2, %xmm1
-; SSE2-NEXT:    orps %xmm4, %xmm1
-; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[3,3,3,3]
-; SSE2-NEXT:    movdqa %xmm0, %xmm2
-; SSE2-NEXT:    maxss %xmm1, %xmm2
-; SSE2-NEXT:    cmpunordss %xmm1, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm3
-; SSE2-NEXT:    andnps %xmm2, %xmm3
-; SSE2-NEXT:    andps %xmm0, %xmm1
-; SSE2-NEXT:    orps %xmm3, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm0
+; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; SSE2-NEXT:    movdqa %xmm1, %xmm2
+; SSE2-NEXT:    maxss %xmm0, %xmm1
+; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm0[2,3,2,3]
+; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm0[3,3,3,3]
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    pand %xmm0, %xmm2
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm2, %xmm0
+; SSE2-NEXT:    movdqa %xmm3, %xmm1
+; SSE2-NEXT:    maxss %xmm0, %xmm1
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    pand %xmm0, %xmm3
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm3, %xmm0
+; SSE2-NEXT:    movdqa %xmm4, %xmm1
+; SSE2-NEXT:    maxss %xmm0, %xmm1
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    pand %xmm0, %xmm4
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm4, %xmm0
 ; SSE2-NEXT:    retq
 ;
 ; SSE41-LABEL: test_v8f32:
@@ -273,8 +261,8 @@ define float @test_v8f32(<8 x float> %a0) {
 ; SSE41-NEXT:    movshdup {{.*#+}} xmm1 = xmm2[1,1,3,3]
 ; SSE41-NEXT:    movaps %xmm2, %xmm0
 ; SSE41-NEXT:    cmpunordss %xmm2, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm3
-; SSE41-NEXT:    andps %xmm1, %xmm3
+; SSE41-NEXT:    movaps %xmm1, %xmm3
+; SSE41-NEXT:    andps %xmm0, %xmm3
 ; SSE41-NEXT:    maxss %xmm2, %xmm1
 ; SSE41-NEXT:    andnps %xmm1, %xmm0
 ; SSE41-NEXT:    orps %xmm3, %xmm0
@@ -283,18 +271,16 @@ define float @test_v8f32(<8 x float> %a0) {
 ; SSE41-NEXT:    movaps %xmm1, %xmm3
 ; SSE41-NEXT:    maxss %xmm0, %xmm3
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm4
-; SSE41-NEXT:    andnps %xmm3, %xmm4
-; SSE41-NEXT:    andps %xmm1, %xmm0
-; SSE41-NEXT:    orps %xmm4, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm1
+; SSE41-NEXT:    andnps %xmm3, %xmm0
+; SSE41-NEXT:    orps %xmm1, %xmm0
 ; SSE41-NEXT:    shufps {{.*#+}} xmm2 = xmm2[3,3,3,3]
 ; SSE41-NEXT:    movaps %xmm2, %xmm1
 ; SSE41-NEXT:    maxss %xmm0, %xmm1
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm3
-; SSE41-NEXT:    andnps %xmm1, %xmm3
-; SSE41-NEXT:    andps %xmm2, %xmm0
-; SSE41-NEXT:    orps %xmm3, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm2
+; SSE41-NEXT:    andnps %xmm1, %xmm0
+; SSE41-NEXT:    orps %xmm2, %xmm0
 ; SSE41-NEXT:    retq
 ;
 ; AVX-LABEL: test_v8f32:
@@ -420,31 +406,27 @@ define float @test_v16f32(<16 x float> %a0) {
 ; SSE2-NEXT:    andps %xmm0, %xmm1
 ; SSE2-NEXT:    andnps %xmm2, %xmm0
 ; SSE2-NEXT:    orps %xmm1, %xmm0
-; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[1,1,1,1]
-; SSE2-NEXT:    movaps %xmm0, %xmm1
-; SSE2-NEXT:    cmpunordss %xmm0, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm3
-; SSE2-NEXT:    andps %xmm2, %xmm3
-; SSE2-NEXT:    maxss %xmm0, %xmm2
-; SSE2-NEXT:    andnps %xmm2, %xmm1
-; SSE2-NEXT:    orps %xmm3, %xmm1
-; SSE2-NEXT:    pshufd {{.*#+}} xmm2 = xmm0[2,3,2,3]
-; SSE2-NEXT:    movdqa %xmm2, %xmm3
-; SSE2-NEXT:    maxss %xmm1, %xmm3
-; SSE2-NEXT:    cmpunordss %xmm1, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm4
-; SSE2-NEXT:    andnps %xmm3, %xmm4
-; SSE2-NEXT:    andps %xmm2, %xmm1
-; SSE2-NEXT:    orps %xmm4, %xmm1
-; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm0[3,3,3,3]
-; SSE2-NEXT:    movdqa %xmm0, %xmm2
-; SSE2-NEXT:    maxss %xmm1, %xmm2
-; SSE2-NEXT:    cmpunordss %xmm1, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm3
-; SSE2-NEXT:    andnps %xmm2, %xmm3
-; SSE2-NEXT:    andps %xmm0, %xmm1
-; SSE2-NEXT:    orps %xmm3, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm0
+; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm0[1,1,1,1]
+; SSE2-NEXT:    movdqa %xmm1, %xmm2
+; SSE2-NEXT:    maxss %xmm0, %xmm1
+; SSE2-NEXT:    pshufd {{.*#+}} xmm3 = xmm0[2,3,2,3]
+; SSE2-NEXT:    pshufd {{.*#+}} xmm4 = xmm0[3,3,3,3]
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    pand %xmm0, %xmm2
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm2, %xmm0
+; SSE2-NEXT:    movdqa %xmm3, %xmm1
+; SSE2-NEXT:    maxss %xmm0, %xmm1
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    pand %xmm0, %xmm3
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm3, %xmm0
+; SSE2-NEXT:    movdqa %xmm4, %xmm1
+; SSE2-NEXT:    maxss %xmm0, %xmm1
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    pand %xmm0, %xmm4
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm4, %xmm0
 ; SSE2-NEXT:    retq
 ;
 ; SSE41-LABEL: test_v16f32:
@@ -466,8 +448,8 @@ define float @test_v16f32(<16 x float> %a0) {
 ; SSE41-NEXT:    movshdup {{.*#+}} xmm2 = xmm1[1,1,3,3]
 ; SSE41-NEXT:    movaps %xmm1, %xmm0
 ; SSE41-NEXT:    cmpunordss %xmm1, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm3
-; SSE41-NEXT:    andps %xmm2, %xmm3
+; SSE41-NEXT:    movaps %xmm2, %xmm3
+; SSE41-NEXT:    andps %xmm0, %xmm3
 ; SSE41-NEXT:    maxss %xmm1, %xmm2
 ; SSE41-NEXT:    andnps %xmm2, %xmm0
 ; SSE41-NEXT:    orps %xmm3, %xmm0
@@ -476,18 +458,16 @@ define float @test_v16f32(<16 x float> %a0) {
 ; SSE41-NEXT:    movaps %xmm2, %xmm3
 ; SSE41-NEXT:    maxss %xmm0, %xmm3
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm4
-; SSE41-NEXT:    andnps %xmm3, %xmm4
-; SSE41-NEXT:    andps %xmm2, %xmm0
-; SSE41-NEXT:    orps %xmm4, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm2
+; SSE41-NEXT:    andnps %xmm3, %xmm0
+; SSE41-NEXT:    orps %xmm2, %xmm0
 ; SSE41-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,3,3,3]
 ; SSE41-NEXT:    movaps %xmm1, %xmm2
 ; SSE41-NEXT:    maxss %xmm0, %xmm2
 ; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE41-NEXT:    movaps %xmm0, %xmm3
-; SSE41-NEXT:    andnps %xmm2, %xmm3
-; SSE41-NEXT:    andps %xmm1, %xmm0
-; SSE41-NEXT:    orps %xmm3, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm1
+; SSE41-NEXT:    andnps %xmm2, %xmm0
+; SSE41-NEXT:    orps %xmm1, %xmm0
 ; SSE41-NEXT:    retq
 ;
 ; AVX-LABEL: test_v16f32:
diff --git a/llvm/test/CodeGen/X86/vector-reduce-fmin.ll b/llvm/test/CodeGen/X86/vector-reduce-fmin.ll
index 5ae9e552d0dcda..f6916f9e091176 100644
--- a/llvm/test/CodeGen/X86/vector-reduce-fmin.ll
+++ b/llvm/test/CodeGen/X86/vector-reduce-fmin.ll
@@ -21,29 +21,25 @@ define float @test_v1f32(<1 x float> %a0) {
 define float @test_v2f32(<2 x float> %a0) {
 ; SSE2-LABEL: test_v2f32:
 ; SSE2:       # %bb.0:
-; SSE2-NEXT:    movaps %xmm0, %xmm2
-; SSE2-NEXT:    shufps {{.*#+}} xmm2 = xmm2[1,1],xmm0[1,1]
 ; SSE2-NEXT:    movaps %xmm0, %xmm1
-; SSE2-NEXT:    cmpunordss %xmm0, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm3
-; SSE2-NEXT:    andps %xmm2, %xmm3
-; SSE2-NEXT:    minss %xmm0, %xmm2
-; SSE2-NEXT:    andnps %xmm2, %xmm1
-; SSE2-NEXT:    orps %xmm3, %xmm1
-; SSE2-NEXT:    movaps %xmm1, %xmm0
+; SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[1,1],xmm0[1,1]
+; SSE2-NEXT:    movaps %xmm1, %xmm2
+; SSE2-NEXT:    minss %xmm0, %xmm1
+; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE2-NEXT:    andps %xmm0, %xmm2
+; SSE2-NEXT:    andnps %xmm1, %xmm0
+; SSE2-NEXT:    orps %xmm2, %xmm0
 ; SSE2-NEXT:    retq
 ;
 ; SSE41-LABEL: test_v2f32:
 ; SSE41:       # %bb.0:
-; SSE41-NEXT:    movshdup {{.*#+}} xmm2 = xmm0[1,1,3,3]
-; SSE41-NEXT:    movaps %xmm0, %xmm1
-; SSE41-NEXT:    cmpunordss %xmm0, %xmm1
-; SSE41-NEXT:    movaps %xmm1, %xmm3
-; SSE41-NEXT:    andps %xmm2, %xmm3
-; SSE41-NEXT:    minss %xmm0, %xmm2
-; SSE41-NEXT:    andnps %xmm2, %xmm1
-; SSE41-NEXT:    orps %xmm3, %xmm1
-; SSE41-NEXT:    movaps %xmm1, %xmm0
+; SSE41-NEXT:    movshdup {{.*#+}} xmm1 = xmm0[1,1,3,3]
+; SSE41-NEXT:    movaps %xmm1, %xmm2
+; SSE41-NEXT:    minss %xmm0, %xmm1
+; SSE41-NEXT:    cmpunordss %xmm0, %xmm0
+; SSE41-NEXT:    andps %xmm0, %xmm2
+; SSE41-NEXT:    andnps %xmm1, %xmm0
+; SSE41-NEXT:    orps %xmm2, %xmm0
 ; SSE41-NEXT:    retq
 ;
 ; AVX-LABEL: test_v2f32:
@@ -73,28 +69,26 @@ define float @test_v4f32(<4 x float> %a0) {
 ; SSE2-NEXT:    movaps %xmm0, %xmm2
 ; SSE2-NEXT:    movaps %xmm0, %xmm3
 ; SSE2-NEXT:    shufps {{.*#+}} xmm3 = xmm3[1,1],xmm0[1,1]
+; SSE2-NEXT:    movaps %xmm3, %xmm4
+; SSE2-NEXT:    minss %xmm0, %xmm3
+; SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,3],xmm0[3,3]
+; SSE2-NEXT:    unpckhpd {{.*#+}} xmm2 = xmm2[1],xmm0[1]
 ; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE2-NEXT:    movaps %xmm0, %xmm4
-; SSE2-NEXT:    andps %xmm3, %xmm4
-; SSE2-NEXT:    minss %xmm1, %xmm3
-; SSE2-NEXT:    shufps {{.*#+}} xmm1 = xmm1[3,3,3,3]
-; SSE2-NEXT:    movhlps {{.*#+}} xmm2 = xmm2[1,1]
+; SSE2-NEXT:    andps %xmm0, %xmm4
 ; SSE2-NEXT:    andnps %xmm3, %xmm0
 ; SSE2-NEXT:    orps %xmm4, %xmm0
 ; SSE2-NEXT:    movaps %xmm2, %xmm3
 ; SSE2-NEXT:    minss %xmm0, %xmm3
 ; SSE2-NEXT:    cmpunordss %xmm0, %xmm0
-; SSE2-NEXT:    movaps %xmm0, %xmm4
-; SSE2-NEXT:    andnps %xmm3, %xmm4
-; SSE2-NEXT:    andps %xmm2, %xmm0
-; SSE2-NEXT:    orps %xmm4, %xmm0
+; SSE2-NEXT:    andps %xmm0, %xmm2
+; SSE2-NEXT:    andnps %xmm3, %xmm0
+; SSE2-NEXT:    orps %xmm2, %xmm0
 ; SSE2-NEXT:    movaps %xmm1, %xmm2
 ; SSE2-NEXT:    minss %x...
[truncated]

andrewbriand changed the title from "Manually lower select in fmin/maxnum to fix bug 25959" to "Manually lower select in fmin/maxnum to fix #26333" on Jan 6, 2025
RKSimon (Collaborator) left a comment

What's causing the original regression? Is it the lowering of the SELECT or SETCC node?

Successfully merging this pull request may close these issues.

[x86, SSE] suboptimal register allocation for scalar minnum/maxnum
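
For reference, the scalar pattern described by the linked issue can be reproduced from source roughly as follows (an illustrative sketch, not taken from the issue report; with clang targeting x86-64 SSE and flags such as -fno-math-errno, these calls typically lower to llvm.maxnum.f32 / llvm.minnum.f32, the intrinsics exercised by fmaxnum.ll and fminnum.ll above):

```cpp
#include <cmath>

// Scalar f32 minnum/maxnum: the case combineFMinNumFMaxNum now lowers manually
// on SSE targets without AVX.
float scalar_max(float x, float y) { return std::fmax(x, y); }
float scalar_min(float x, float y) { return std::fmin(x, y); }
```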