RISC-V: Refactor RVV machine modes

The current machine mode layout is hard to maintain, read, and understand.

For an LMUL = 1 SI vector mode, the current scheme uses:
  1. VNx1SI mode when TARGET_MIN_VLEN = 32.
  2. VNx2SI mode when TARGET_MIN_VLEN = 64.
  3. VNx4SI mode when TARGET_MIN_VLEN = 128.

Such an implementation produces redundant machine modes and thus redundant machine description patterns.

This patch refactors the machine modes into the following three formats:

  1. mask modes: RVVMF64BImode, RVVMF32BImode, ..., RVVM1BImode.
                RVVMF64BImode means such a mask mode occupies 1/64 of an RVV M1 register;
                RVVM1BImode occupies a full LMUL = 1 register.
  2. non-tuple vector modes:
                RVV<LMUL><BASE_MODE>, e.g. RVVMF8QImode for SEW = 8 and LMUL = MF8.
  3. tuple vector modes:
                RVV<LMUL>x<NF><BASE_MODE>, e.g. RVVM1x8QImode for LMUL = M1 and NF = 8.

For example, for SEW = 16 and LMUL = MF4, the integer mode is always RVVMF4HImode; only its size is then adjusted according to TARGET_MIN_VLEN, as the sketch below illustrates.
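
To make that concrete, here is a minimal sketch (rvv_min_nunits is a hypothetical helper used for illustration only; the patch itself encodes this information through riscv_v_adjust_nunits and poly_int-based sizes) of how the minimum element count of a mode follows from its name:

    #include <cstdint>

    /* Illustrative only: minimum number of elements of an RVV<LMUL>[x<NF>]<MODE>
       vector, given the SEW of <MODE> in bits and the LMUL written as the
       fraction num/den (M2 -> 2/1, MF4 -> 1/4, ...).  */
    uint64_t
    rvv_min_nunits (uint64_t min_vlen, uint64_t sew, uint64_t lmul_num,
                    uint64_t lmul_den, uint64_t nf)
    {
      /* Bits covered by one field of the (possibly tuple) mode.  */
      uint64_t bits_per_field = min_vlen * lmul_num / lmul_den;
      /* NF fields of SEW-bit elements.  */
      return nf * bits_per_field / sew;
    }

    /* RVVMF4HImode (SEW = 16, LMUL = MF4, NF = 1):
         rvv_min_nunits (64, 16, 1, 4, 1)  == 1
         rvv_min_nunits (128, 16, 1, 4, 1) == 2
       RVVM1SImode (SEW = 32, LMUL = M1, NF = 1):
         rvv_min_nunits (32, 32, 1, 1, 1)  == 1
         rvv_min_nunits (128, 32, 1, 1, 1) == 4  */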

Before this patch: 17551 machine description patterns.
After this patch: 14132 machine description patterns, a reduction of more than 3000 patterns.

Regression tests of gcc/g++ on rv32/rv64 all passed.

Ok for trunk?

gcc/ChangeLog:

	* config/riscv/autovec.md
	(len_mask_gather_load<VNX16_QHSD:mode><VNX16_QHSDI:mode>):
	Refactor RVV machine modes.
	(len_mask_gather_load<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
	(len_mask_gather_load<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
	(len_mask_gather_load<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
	(len_mask_gather_load<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
	(len_mask_gather_load<mode><mode>): Ditto.
	(len_mask_gather_load<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
	(len_mask_scatter_store<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Ditto.
	(len_mask_scatter_store<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
	(len_mask_scatter_store<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
	(len_mask_scatter_store<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
	(len_mask_scatter_store<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
	(len_mask_scatter_store<mode><mode>): Ditto.
	(len_mask_scatter_store<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
	* config/riscv/riscv-modes.def (VECTOR_BOOL_MODE): Ditto.
	(ADJUST_NUNITS): Ditto.
	(ADJUST_ALIGNMENT): Ditto.
	(ADJUST_BYTESIZE): Ditto.
	(ADJUST_PRECISION): Ditto.
	(RVV_MODES): Ditto.
	(RVV_WHOLE_MODES): Ditto.
	(RVV_FRACT_MODE): Ditto.
	(RVV_NF8_MODES): Ditto.
	(RVV_NF4_MODES): Ditto.
	(VECTOR_MODES_WITH_PREFIX): Ditto.
	(VECTOR_MODE_WITH_PREFIX): Ditto.
	(RVV_TUPLE_MODES): Ditto.
	(RVV_NF2_MODES): Ditto.
	(RVV_TUPLE_PARTIAL_MODES): Ditto.
	* config/riscv/riscv-v.cc (struct mode_vtype_group): Ditto.
	(ENTRY): Ditto.
	(TUPLE_ENTRY): Ditto.
	(get_vlmul): Ditto.
	(get_nf): Ditto.
	(get_ratio): Ditto.
	(preferred_simd_mode): Ditto.
	(autovectorize_vector_modes): Ditto.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_TYPE): Ditto.
	* config/riscv/riscv-vector-builtins.def (DEF_RVV_TYPE): Ditto.
	(vbool64_t): Ditto.
	(vbool32_t): Ditto.
	(vbool16_t): Ditto.
	(vbool8_t): Ditto.
	(vbool4_t): Ditto.
	(vbool2_t): Ditto.
	(vbool1_t): Ditto.
	(vint8mf8_t): Ditto.
	(vuint8mf8_t): Ditto.
	(vint8mf4_t): Ditto.
	(vuint8mf4_t): Ditto.
	(vint8mf2_t): Ditto.
	(vuint8mf2_t): Ditto.
	(vint8m1_t): Ditto.
	(vuint8m1_t): Ditto.
	(vint8m2_t): Ditto.
	(vuint8m2_t): Ditto.
	(vint8m4_t): Ditto.
	(vuint8m4_t): Ditto.
	(vint8m8_t): Ditto.
	(vuint8m8_t): Ditto.
	(vint16mf4_t): Ditto.
	(vuint16mf4_t): Ditto.
	(vint16mf2_t): Ditto.
	(vuint16mf2_t): Ditto.
	(vint16m1_t): Ditto.
	(vuint16m1_t): Ditto.
	(vint16m2_t): Ditto.
	(vuint16m2_t): Ditto.
	(vint16m4_t): Ditto.
	(vuint16m4_t): Ditto.
	(vint16m8_t): Ditto.
	(vuint16m8_t): Ditto.
	(vint32mf2_t): Ditto.
	(vuint32mf2_t): Ditto.
	(vint32m1_t): Ditto.
	(vuint32m1_t): Ditto.
	(vint32m2_t): Ditto.
	(vuint32m2_t): Ditto.
	(vint32m4_t): Ditto.
	(vuint32m4_t): Ditto.
	(vint32m8_t): Ditto.
	(vuint32m8_t): Ditto.
	(vint64m1_t): Ditto.
	(vuint64m1_t): Ditto.
	(vint64m2_t): Ditto.
	(vuint64m2_t): Ditto.
	(vint64m4_t): Ditto.
	(vuint64m4_t): Ditto.
	(vint64m8_t): Ditto.
	(vuint64m8_t): Ditto.
	(vfloat16mf4_t): Ditto.
	(vfloat16mf2_t): Ditto.
	(vfloat16m1_t): Ditto.
	(vfloat16m2_t): Ditto.
	(vfloat16m4_t): Ditto.
	(vfloat16m8_t): Ditto.
	(vfloat32mf2_t): Ditto.
	(vfloat32m1_t): Ditto.
	(vfloat32m2_t): Ditto.
	(vfloat32m4_t): Ditto.
	(vfloat32m8_t): Ditto.
	(vfloat64m1_t): Ditto.
	(vfloat64m2_t): Ditto.
	(vfloat64m4_t): Ditto.
	(vfloat64m8_t): Ditto.
	* config/riscv/riscv-vector-switch.def (ENTRY): Ditto.
	(TUPLE_ENTRY): Ditto.
	* config/riscv/riscv-vsetvl.cc (change_insn): Ditto.
	* config/riscv/riscv.cc (riscv_valid_lo_sum_p): Ditto.
	(riscv_v_adjust_nunits): Ditto.
	(riscv_v_adjust_bytesize): Ditto.
	(riscv_v_adjust_precision): Ditto.
	(riscv_convert_vector_bits): Ditto.
	* config/riscv/riscv.h (riscv_v_adjust_nunits): Ditto.
	* config/riscv/riscv.md: Ditto.
	* config/riscv/vector-iterators.md: Ditto.
	* config/riscv/vector.md
	(@pred_indexed_<order>store<VNX16_QHSD:mode><VNX16_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>store<VNX16_QHS:mode><VNX16_QHSI:mode>): Ditto.
	(@pred_indexed_<order>store<VNX32_QHS:mode><VNX32_QHSI:mode>): Ditto.
	(@pred_indexed_<order>store<VNX32_QH:mode><VNX32_QHI:mode>): Ditto.
	(@pred_indexed_<order>store<VNX64_QH:mode><VNX64_QHI:mode>): Ditto.
	(@pred_indexed_<order>store<VNX64_Q:mode><VNX64_Q:mode>): Ditto.
	(@pred_indexed_<order>store<VNX128_Q:mode><VNX128_Q:mode>): Ditto.
	(@pred_indexed_<order>load<V1T:mode><V1I:mode>): Ditto.
	(@pred_indexed_<order>load<V1T:mode><VNX1_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>load<V2T:mode><V2I:mode>): Ditto.
	(@pred_indexed_<order>load<V2T:mode><VNX2_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>load<V4T:mode><V4I:mode>): Ditto.
	(@pred_indexed_<order>load<V4T:mode><VNX4_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>load<V8T:mode><V8I:mode>): Ditto.
	(@pred_indexed_<order>load<V8T:mode><VNX8_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>load<V16T:mode><V16I:mode>): Ditto.
	(@pred_indexed_<order>load<V16T:mode><VNX16_QHSI:mode>): Ditto.
	(@pred_indexed_<order>load<V32T:mode><V32I:mode>): Ditto.
	(@pred_indexed_<order>load<V32T:mode><VNX32_QHI:mode>): Ditto.
	(@pred_indexed_<order>load<V64T:mode><V64I:mode>): Ditto.
	(@pred_indexed_<order>store<V1T:mode><V1I:mode>): Ditto.
	(@pred_indexed_<order>store<V1T:mode><VNX1_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>store<V2T:mode><V2I:mode>): Ditto.
	(@pred_indexed_<order>store<V2T:mode><VNX2_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>store<V4T:mode><V4I:mode>): Ditto.
	(@pred_indexed_<order>store<V4T:mode><VNX4_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>store<V8T:mode><V8I:mode>): Ditto.
	(@pred_indexed_<order>store<V8T:mode><VNX8_QHSDI:mode>): Ditto.
	(@pred_indexed_<order>store<V16T:mode><V16I:mode>): Ditto.
	(@pred_indexed_<order>store<V16T:mode><VNX16_QHSI:mode>): Ditto.
	(@pred_indexed_<order>store<V32T:mode><V32I:mode>): Ditto.
	(@pred_indexed_<order>store<V32T:mode><VNX32_QHI:mode>): Ditto.
	(@pred_indexed_<order>store<V64T:mode><V64I:mode>): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/gather-scatter/gather_load_run-7.c:
	Adapt test.
	* gcc.target/riscv/rvv/autovec/gather-scatter/gather_load_run-8.c:
	Ditto.
	* gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store-9.c:
	Ditto.
	* gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-8.c:
	Ditto.
	* gcc.target/riscv/rvv/autovec/gather-scatter/scatter_store_run-8.c:
	Ditto.
Commit 879c52c9da (parent 49bed11d96) by Juzhe-Zhong, 2023-07-20 07:21:20 +08:00; committed by Pan Li.
17 changed files with 2463 additions and 2553 deletions.

gcc/config/riscv/autovec.md

@@ -61,105 +61,90 @@
;; == Gather Load
;; =========================================================================
(define_expand "len_mask_gather_load<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
[(match_operand:VNX1_QHSD 0 "register_operand")
(define_expand "len_mask_gather_load<RATIO64:mode><RATIO64I:mode>"
[(match_operand:RATIO64 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX1_QHSDI 2 "register_operand")
(match_operand 3 "<VNX1_QHSD:gs_extension>")
(match_operand 4 "<VNX1_QHSD:gs_scale>")
(match_operand:RATIO64I 2 "register_operand")
(match_operand 3 "<RATIO64:gs_extension>")
(match_operand 4 "<RATIO64:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
(define_expand "len_mask_gather_load<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
[(match_operand:VNX2_QHSD 0 "register_operand")
(define_expand "len_mask_gather_load<RATIO32:mode><RATIO32I:mode>"
[(match_operand:RATIO32 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX2_QHSDI 2 "register_operand")
(match_operand 3 "<VNX2_QHSD:gs_extension>")
(match_operand 4 "<VNX2_QHSD:gs_scale>")
(match_operand:RATIO32I 2 "register_operand")
(match_operand 3 "<RATIO32:gs_extension>")
(match_operand 4 "<RATIO32:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
(define_expand "len_mask_gather_load<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
[(match_operand:VNX4_QHSD 0 "register_operand")
(define_expand "len_mask_gather_load<RATIO16:mode><RATIO16I:mode>"
[(match_operand:RATIO16 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX4_QHSDI 2 "register_operand")
(match_operand 3 "<VNX4_QHSD:gs_extension>")
(match_operand 4 "<VNX4_QHSD:gs_scale>")
(match_operand:RATIO16I 2 "register_operand")
(match_operand 3 "<RATIO16:gs_extension>")
(match_operand 4 "<RATIO16:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
(define_expand "len_mask_gather_load<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
[(match_operand:VNX8_QHSD 0 "register_operand")
(define_expand "len_mask_gather_load<RATIO8:mode><RATIO8I:mode>"
[(match_operand:RATIO8 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX8_QHSDI 2 "register_operand")
(match_operand 3 "<VNX8_QHSD:gs_extension>")
(match_operand 4 "<VNX8_QHSD:gs_scale>")
(match_operand:RATIO8I 2 "register_operand")
(match_operand 3 "<RATIO8:gs_extension>")
(match_operand 4 "<RATIO8:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
(define_expand "len_mask_gather_load<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
[(match_operand:VNX16_QHSD 0 "register_operand")
(define_expand "len_mask_gather_load<RATIO4:mode><RATIO4I:mode>"
[(match_operand:RATIO4 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX16_QHSDI 2 "register_operand")
(match_operand 3 "<VNX16_QHSD:gs_extension>")
(match_operand 4 "<VNX16_QHSD:gs_scale>")
(match_operand:RATIO4I 2 "register_operand")
(match_operand 3 "<RATIO4:gs_extension>")
(match_operand 4 "<RATIO4:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
(define_expand "len_mask_gather_load<VNX32_QHS:mode><VNX32_QHSI:mode>"
[(match_operand:VNX32_QHS 0 "register_operand")
(define_expand "len_mask_gather_load<RATIO2:mode><RATIO2I:mode>"
[(match_operand:RATIO2 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX32_QHSI 2 "register_operand")
(match_operand 3 "<VNX32_QHS:gs_extension>")
(match_operand 4 "<VNX32_QHS:gs_scale>")
(match_operand:RATIO2I 2 "register_operand")
(match_operand 3 "<RATIO2:gs_extension>")
(match_operand 4 "<RATIO2:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
DONE;
})
(define_expand "len_mask_gather_load<VNX64_QH:mode><VNX64_QHI:mode>"
[(match_operand:VNX64_QH 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX64_QHI 2 "register_operand")
(match_operand 3 "<VNX64_QH:gs_extension>")
(match_operand 4 "<VNX64_QH:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
@@ -170,15 +155,15 @@
;; larger SEW. Since RVV indexed load/store support zero extend
;; implicitly and not support scaling, we should only allow
;; operands[3] and operands[4] to be const_1_operand.
(define_expand "len_mask_gather_load<mode><mode>"
[(match_operand:VNX128_Q 0 "register_operand")
(define_expand "len_mask_gather_load<RATIO1:mode><RATIO1:mode>"
[(match_operand:RATIO1 0 "register_operand")
(match_operand 1 "pmode_reg_or_0_operand")
(match_operand:VNX128_Q 2 "register_operand")
(match_operand 3 "const_1_operand")
(match_operand 4 "const_1_operand")
(match_operand:RATIO1 2 "register_operand")
(match_operand 3 "<RATIO1:gs_extension>")
(match_operand 4 "<RATIO1:gs_scale>")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VM> 7 "vector_mask_operand")]
(match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, true);
@@ -189,105 +174,90 @@
;; == Scatter Store
;; =========================================================================
(define_expand "len_mask_scatter_store<VNX1_QHSD:mode><VNX1_QHSDI:mode>"
(define_expand "len_mask_scatter_store<RATIO64:mode><RATIO64I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX1_QHSDI 1 "register_operand")
(match_operand 2 "<VNX1_QHSD:gs_extension>")
(match_operand 3 "<VNX1_QHSD:gs_scale>")
(match_operand:VNX1_QHSD 4 "register_operand")
(match_operand:RATIO64I 1 "register_operand")
(match_operand 2 "<RATIO64:gs_extension>")
(match_operand 3 "<RATIO64:gs_scale>")
(match_operand:RATIO64 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX1_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO64:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
(define_expand "len_mask_scatter_store<VNX2_QHSD:mode><VNX2_QHSDI:mode>"
(define_expand "len_mask_scatter_store<RATIO32:mode><RATIO32I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX2_QHSDI 1 "register_operand")
(match_operand 2 "<VNX2_QHSD:gs_extension>")
(match_operand 3 "<VNX2_QHSD:gs_scale>")
(match_operand:VNX2_QHSD 4 "register_operand")
(match_operand:RATIO32I 1 "register_operand")
(match_operand 2 "<RATIO32:gs_extension>")
(match_operand 3 "<RATIO32:gs_scale>")
(match_operand:RATIO32 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX2_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO32:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
(define_expand "len_mask_scatter_store<VNX4_QHSD:mode><VNX4_QHSDI:mode>"
(define_expand "len_mask_scatter_store<RATIO16:mode><RATIO16I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX4_QHSDI 1 "register_operand")
(match_operand 2 "<VNX4_QHSD:gs_extension>")
(match_operand 3 "<VNX4_QHSD:gs_scale>")
(match_operand:VNX4_QHSD 4 "register_operand")
(match_operand:RATIO16I 1 "register_operand")
(match_operand 2 "<RATIO16:gs_extension>")
(match_operand 3 "<RATIO16:gs_scale>")
(match_operand:RATIO16 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX4_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO16:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
(define_expand "len_mask_scatter_store<VNX8_QHSD:mode><VNX8_QHSDI:mode>"
(define_expand "len_mask_scatter_store<RATIO8:mode><RATIO8I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX8_QHSDI 1 "register_operand")
(match_operand 2 "<VNX8_QHSD:gs_extension>")
(match_operand 3 "<VNX8_QHSD:gs_scale>")
(match_operand:VNX8_QHSD 4 "register_operand")
(match_operand:RATIO8I 1 "register_operand")
(match_operand 2 "<RATIO8:gs_extension>")
(match_operand 3 "<RATIO8:gs_scale>")
(match_operand:RATIO8 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX8_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO8:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
(define_expand "len_mask_scatter_store<VNX16_QHSD:mode><VNX16_QHSDI:mode>"
(define_expand "len_mask_scatter_store<RATIO4:mode><RATIO4I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX16_QHSDI 1 "register_operand")
(match_operand 2 "<VNX16_QHSD:gs_extension>")
(match_operand 3 "<VNX16_QHSD:gs_scale>")
(match_operand:VNX16_QHSD 4 "register_operand")
(match_operand:RATIO4I 1 "register_operand")
(match_operand 2 "<RATIO4:gs_extension>")
(match_operand 3 "<RATIO4:gs_scale>")
(match_operand:RATIO4 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX16_QHSD:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO4:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
(define_expand "len_mask_scatter_store<VNX32_QHS:mode><VNX32_QHSI:mode>"
(define_expand "len_mask_scatter_store<RATIO2:mode><RATIO2I:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX32_QHSI 1 "register_operand")
(match_operand 2 "<VNX32_QHS:gs_extension>")
(match_operand 3 "<VNX32_QHS:gs_scale>")
(match_operand:VNX32_QHS 4 "register_operand")
(match_operand:RATIO2I 1 "register_operand")
(match_operand 2 "<RATIO2:gs_extension>")
(match_operand 3 "<RATIO2:gs_scale>")
(match_operand:RATIO2 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX32_QHS:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
DONE;
})
(define_expand "len_mask_scatter_store<VNX64_QH:mode><VNX64_QHI:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX64_QHI 1 "register_operand")
(match_operand 2 "<VNX64_QH:gs_extension>")
(match_operand 3 "<VNX64_QH:gs_scale>")
(match_operand:VNX64_QH 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VNX64_QH:VM> 7 "vector_mask_operand")]
(match_operand:<RATIO2:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);
@@ -298,15 +268,15 @@
;; larger SEW. Since RVV indexed load/store support zero extend
;; implicitly and not support scaling, we should only allow
;; operands[3] and operands[4] to be const_1_operand.
(define_expand "len_mask_scatter_store<mode><mode>"
(define_expand "len_mask_scatter_store<RATIO1:mode><RATIO1:mode>"
[(match_operand 0 "pmode_reg_or_0_operand")
(match_operand:VNX128_Q 1 "register_operand")
(match_operand 2 "const_1_operand")
(match_operand 3 "const_1_operand")
(match_operand:VNX128_Q 4 "register_operand")
(match_operand:RATIO1 1 "register_operand")
(match_operand 2 "<RATIO1:gs_extension>")
(match_operand 3 "<RATIO1:gs_scale>")
(match_operand:RATIO1 4 "register_operand")
(match_operand 5 "autovec_length_operand")
(match_operand 6 "const_0_operand")
(match_operand:<VM> 7 "vector_mask_operand")]
(match_operand:<RATIO1:VM> 7 "vector_mask_operand")]
"TARGET_VECTOR"
{
riscv_vector::expand_gather_scatter (operands, false);

gcc/config/riscv/riscv-modes.def

@@ -27,311 +27,287 @@ FLOAT_MODE (TF, 16, ieee_quad_format);
/* Encode the ratio of SEW/LMUL into the mask types. There are the following
* mask types. */
/* | Mode | MIN_VLEN = 32 | MIN_VLEN = 64 | MIN_VLEN = 128 |
| | SEW/LMUL | SEW/LMUL | SEW/LMUL |
| VNx1BI | 32 | 64 | 128 |
| VNx2BI | 16 | 32 | 64 |
| VNx4BI | 8 | 16 | 32 |
| VNx8BI | 4 | 8 | 16 |
| VNx16BI | 2 | 4 | 8 |
| VNx32BI | 1 | 2 | 4 |
| VNx64BI | N/A | 1 | 2 |
| VNx128BI | N/A | N/A | 1 | */
/* Encode the ratio of SEW/LMUL into the mask types.
There are the following mask types.
n = SEW/LMUL
|Modes| n = 1 | n = 2 | n = 4 | n = 8 | n = 16 | n = 32 | n = 64 |
|BI |RVVM1BI |RVVMF2BI |RVVMF4BI |RVVMF8BI |RVVMF16BI |RVVMF32BI |RVVMF64BI | */
/* For RVV modes, each boolean value occupies 1-bit.
4th argument is specify the minmial possible size of the vector mode,
and will adjust to the right size by ADJUST_BYTESIZE. */
VECTOR_BOOL_MODE (VNx1BI, 1, BI, 1);
VECTOR_BOOL_MODE (VNx2BI, 2, BI, 1);
VECTOR_BOOL_MODE (VNx4BI, 4, BI, 1);
VECTOR_BOOL_MODE (VNx8BI, 8, BI, 1);
VECTOR_BOOL_MODE (VNx16BI, 16, BI, 2);
VECTOR_BOOL_MODE (VNx32BI, 32, BI, 4);
VECTOR_BOOL_MODE (VNx64BI, 64, BI, 8);
VECTOR_BOOL_MODE (VNx128BI, 128, BI, 16);
VECTOR_BOOL_MODE (RVVM1BI, 64, BI, 8);
VECTOR_BOOL_MODE (RVVMF2BI, 32, BI, 4);
VECTOR_BOOL_MODE (RVVMF4BI, 16, BI, 2);
VECTOR_BOOL_MODE (RVVMF8BI, 8, BI, 1);
VECTOR_BOOL_MODE (RVVMF16BI, 4, BI, 1);
VECTOR_BOOL_MODE (RVVMF32BI, 2, BI, 1);
VECTOR_BOOL_MODE (RVVMF64BI, 1, BI, 1);
ADJUST_NUNITS (VNx1BI, riscv_v_adjust_nunits (VNx1BImode, 1));
ADJUST_NUNITS (VNx2BI, riscv_v_adjust_nunits (VNx2BImode, 2));
ADJUST_NUNITS (VNx4BI, riscv_v_adjust_nunits (VNx4BImode, 4));
ADJUST_NUNITS (VNx8BI, riscv_v_adjust_nunits (VNx8BImode, 8));
ADJUST_NUNITS (VNx16BI, riscv_v_adjust_nunits (VNx16BImode, 16));
ADJUST_NUNITS (VNx32BI, riscv_v_adjust_nunits (VNx32BImode, 32));
ADJUST_NUNITS (VNx64BI, riscv_v_adjust_nunits (VNx64BImode, 64));
ADJUST_NUNITS (VNx128BI, riscv_v_adjust_nunits (VNx128BImode, 128));
ADJUST_NUNITS (RVVM1BI, riscv_v_adjust_nunits (RVVM1BImode, 64));
ADJUST_NUNITS (RVVMF2BI, riscv_v_adjust_nunits (RVVMF2BImode, 32));
ADJUST_NUNITS (RVVMF4BI, riscv_v_adjust_nunits (RVVMF4BImode, 16));
ADJUST_NUNITS (RVVMF8BI, riscv_v_adjust_nunits (RVVMF8BImode, 8));
ADJUST_NUNITS (RVVMF16BI, riscv_v_adjust_nunits (RVVMF16BImode, 4));
ADJUST_NUNITS (RVVMF32BI, riscv_v_adjust_nunits (RVVMF32BImode, 2));
ADJUST_NUNITS (RVVMF64BI, riscv_v_adjust_nunits (RVVMF64BImode, 1));
ADJUST_ALIGNMENT (VNx1BI, 1);
ADJUST_ALIGNMENT (VNx2BI, 1);
ADJUST_ALIGNMENT (VNx4BI, 1);
ADJUST_ALIGNMENT (VNx8BI, 1);
ADJUST_ALIGNMENT (VNx16BI, 1);
ADJUST_ALIGNMENT (VNx32BI, 1);
ADJUST_ALIGNMENT (VNx64BI, 1);
ADJUST_ALIGNMENT (VNx128BI, 1);
ADJUST_ALIGNMENT (RVVM1BI, 1);
ADJUST_ALIGNMENT (RVVMF2BI, 1);
ADJUST_ALIGNMENT (RVVMF4BI, 1);
ADJUST_ALIGNMENT (RVVMF8BI, 1);
ADJUST_ALIGNMENT (RVVMF16BI, 1);
ADJUST_ALIGNMENT (RVVMF32BI, 1);
ADJUST_ALIGNMENT (RVVMF64BI, 1);
ADJUST_BYTESIZE (VNx1BI, riscv_v_adjust_bytesize (VNx1BImode, 1));
ADJUST_BYTESIZE (VNx2BI, riscv_v_adjust_bytesize (VNx2BImode, 1));
ADJUST_BYTESIZE (VNx4BI, riscv_v_adjust_bytesize (VNx4BImode, 1));
ADJUST_BYTESIZE (VNx8BI, riscv_v_adjust_bytesize (VNx8BImode, 1));
ADJUST_BYTESIZE (VNx16BI, riscv_v_adjust_bytesize (VNx16BImode, 2));
ADJUST_BYTESIZE (VNx32BI, riscv_v_adjust_bytesize (VNx32BImode, 4));
ADJUST_BYTESIZE (VNx64BI, riscv_v_adjust_bytesize (VNx64BImode, 8));
ADJUST_BYTESIZE (VNx128BI, riscv_v_adjust_bytesize (VNx128BImode, 16));
ADJUST_PRECISION (RVVM1BI, riscv_v_adjust_precision (RVVM1BImode, 64));
ADJUST_PRECISION (RVVMF2BI, riscv_v_adjust_precision (RVVMF2BImode, 32));
ADJUST_PRECISION (RVVMF4BI, riscv_v_adjust_precision (RVVMF4BImode, 16));
ADJUST_PRECISION (RVVMF8BI, riscv_v_adjust_precision (RVVMF8BImode, 8));
ADJUST_PRECISION (RVVMF16BI, riscv_v_adjust_precision (RVVMF16BImode, 4));
ADJUST_PRECISION (RVVMF32BI, riscv_v_adjust_precision (RVVMF32BImode, 2));
ADJUST_PRECISION (RVVMF64BI, riscv_v_adjust_precision (RVVMF64BImode, 1));
ADJUST_PRECISION (VNx1BI, riscv_v_adjust_precision (VNx1BImode, 1));
ADJUST_PRECISION (VNx2BI, riscv_v_adjust_precision (VNx2BImode, 2));
ADJUST_PRECISION (VNx4BI, riscv_v_adjust_precision (VNx4BImode, 4));
ADJUST_PRECISION (VNx8BI, riscv_v_adjust_precision (VNx8BImode, 8));
ADJUST_PRECISION (VNx16BI, riscv_v_adjust_precision (VNx16BImode, 16));
ADJUST_PRECISION (VNx32BI, riscv_v_adjust_precision (VNx32BImode, 32));
ADJUST_PRECISION (VNx64BI, riscv_v_adjust_precision (VNx64BImode, 64));
ADJUST_PRECISION (VNx128BI, riscv_v_adjust_precision (VNx128BImode, 128));
ADJUST_BYTESIZE (RVVM1BI, riscv_v_adjust_bytesize (RVVM1BImode, 8));
ADJUST_BYTESIZE (RVVMF2BI, riscv_v_adjust_bytesize (RVVMF2BImode, 4));
ADJUST_BYTESIZE (RVVMF4BI, riscv_v_adjust_bytesize (RVVMF4BImode, 2));
ADJUST_BYTESIZE (RVVMF8BI, riscv_v_adjust_bytesize (RVVMF8BImode, 1));
ADJUST_BYTESIZE (RVVMF16BI, riscv_v_adjust_bytesize (RVVMF16BImode, 1));
ADJUST_BYTESIZE (RVVMF32BI, riscv_v_adjust_bytesize (RVVMF32BImode, 1));
ADJUST_BYTESIZE (RVVMF64BI, riscv_v_adjust_bytesize (RVVMF64BImode, 1));
/*
| Mode | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
| | LMUL | SEW/LMUL | LMUL | SEW/LMUL | LMUL | SEW/LMUL |
| VNx1QI | MF4 | 32 | MF8 | 64 | N/A | N/A |
| VNx2QI | MF2 | 16 | MF4 | 32 | MF8 | 64 |
| VNx4QI | M1 | 8 | MF2 | 16 | MF4 | 32 |
| VNx8QI | M2 | 4 | M1 | 8 | MF2 | 16 |
| VNx16QI | M4 | 2 | M2 | 4 | M1 | 8 |
| VNx32QI | M8 | 1 | M4 | 2 | M2 | 4 |
| VNx64QI | N/A | N/A | M8 | 1 | M4 | 2 |
| VNx128QI | N/A | N/A | N/A | N/A | M8 | 1 |
| VNx1(HI|HF) | MF2 | 32 | MF4 | 64 | N/A | N/A |
| VNx2(HI|HF) | M1 | 16 | MF2 | 32 | MF4 | 64 |
| VNx4(HI|HF) | M2 | 8 | M1 | 16 | MF2 | 32 |
| VNx8(HI|HF) | M4 | 4 | M2 | 8 | M1 | 16 |
| VNx16(HI|HF)| M8 | 2 | M4 | 4 | M2 | 8 |
| VNx32(HI|HF)| N/A | N/A | M8 | 2 | M4 | 4 |
| VNx64(HI|HF)| N/A | N/A | N/A | N/A | M8 | 2 |
| VNx1(SI|SF) | M1 | 32 | MF2 | 64 | MF2 | 64 |
| VNx2(SI|SF) | M2 | 16 | M1 | 32 | M1 | 32 |
| VNx4(SI|SF) | M4 | 8 | M2 | 16 | M2 | 16 |
| VNx8(SI|SF) | M8 | 4 | M4 | 8 | M4 | 8 |
| VNx16(SI|SF)| N/A | N/A | M8 | 4 | M8 | 4 |
| VNx1(DI|DF) | N/A | N/A | M1 | 64 | N/A | N/A |
| VNx2(DI|DF) | N/A | N/A | M2 | 32 | M1 | 64 |
| VNx4(DI|DF) | N/A | N/A | M4 | 16 | M2 | 32 |
| VNx8(DI|DF) | N/A | N/A | M8 | 8 | M4 | 16 |
| VNx16(DI|DF)| N/A | N/A | N/A | N/A | M8 | 8 |
*/
/* Encode SEW and LMUL into data types.
We enforce the constraint LMUL SEW/ELEN in the implementation.
There are the following data types for ELEN = 64.
/* Define RVV modes whose sizes are multiples of 64-bit chunks. */
#define RVV_MODES(NVECS, VB, VH, VS, VD) \
VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 8 * NVECS, 0); \
VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 4 * NVECS, 0); \
VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 4 * NVECS, 0); \
VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 2 * NVECS, 0); \
VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 2 * NVECS, 0); \
VECTOR_MODE_WITH_PREFIX (VNx, INT, DI, NVECS, 0); \
VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, DF, NVECS, 0); \
|Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
|DI |RVVM1DI|RVVM2DI|RVVM4DI|RVVM8DI|N/A |N/A |N/A |
|SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|RVVMF2SI|N/A |N/A |
|HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|RVVMF4HI|N/A |
|QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|RVVMF8QI|
|DF |RVVM1DF|RVVM2DF|RVVM4DF|RVVM8DF|N/A |N/A |N/A |
|SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|RVVMF2SF|N/A |N/A |
|HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|RVVMF4HF|N/A |
There are the following data types for ELEN = 32.
|Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
|SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|N/A |N/A |N/A |
|HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|N/A |N/A |
|QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|N/A |
|SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|N/A |N/A |N/A |
|HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|N/A |N/A | */
#define RVV_WHOLE_MODES(LMUL) \
VECTOR_MODE_WITH_PREFIX (RVVM, INT, QI, LMUL, 0); \
VECTOR_MODE_WITH_PREFIX (RVVM, INT, HI, LMUL, 0); \
VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, HF, LMUL, 0); \
VECTOR_MODE_WITH_PREFIX (RVVM, INT, SI, LMUL, 0); \
VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, SF, LMUL, 0); \
VECTOR_MODE_WITH_PREFIX (RVVM, INT, DI, LMUL, 0); \
VECTOR_MODE_WITH_PREFIX (RVVM, FLOAT, DF, LMUL, 0); \
\
ADJUST_NUNITS (VB##QI, riscv_v_adjust_nunits (VB##QI##mode, NVECS * 8)); \
ADJUST_NUNITS (VH##HI, riscv_v_adjust_nunits (VH##HI##mode, NVECS * 4)); \
ADJUST_NUNITS (VS##SI, riscv_v_adjust_nunits (VS##SI##mode, NVECS * 2)); \
ADJUST_NUNITS (VD##DI, riscv_v_adjust_nunits (VD##DI##mode, NVECS)); \
ADJUST_NUNITS (VH##HF, riscv_v_adjust_nunits (VH##HF##mode, NVECS * 4)); \
ADJUST_NUNITS (VS##SF, riscv_v_adjust_nunits (VS##SF##mode, NVECS * 2)); \
ADJUST_NUNITS (VD##DF, riscv_v_adjust_nunits (VD##DF##mode, NVECS)); \
ADJUST_NUNITS (RVVM##LMUL##QI, \
riscv_v_adjust_nunits (RVVM##LMUL##QImode, false, LMUL, 1)); \
ADJUST_NUNITS (RVVM##LMUL##HI, \
riscv_v_adjust_nunits (RVVM##LMUL##HImode, false, LMUL, 1)); \
ADJUST_NUNITS (RVVM##LMUL##SI, \
riscv_v_adjust_nunits (RVVM##LMUL##SImode, false, LMUL, 1)); \
ADJUST_NUNITS (RVVM##LMUL##DI, \
riscv_v_adjust_nunits (RVVM##LMUL##DImode, false, LMUL, 1)); \
ADJUST_NUNITS (RVVM##LMUL##HF, \
riscv_v_adjust_nunits (RVVM##LMUL##HFmode, false, LMUL, 1)); \
ADJUST_NUNITS (RVVM##LMUL##SF, \
riscv_v_adjust_nunits (RVVM##LMUL##SFmode, false, LMUL, 1)); \
ADJUST_NUNITS (RVVM##LMUL##DF, \
riscv_v_adjust_nunits (RVVM##LMUL##DFmode, false, LMUL, 1)); \
\
ADJUST_ALIGNMENT (VB##QI, 1); \
ADJUST_ALIGNMENT (VH##HI, 2); \
ADJUST_ALIGNMENT (VS##SI, 4); \
ADJUST_ALIGNMENT (VD##DI, 8); \
ADJUST_ALIGNMENT (VH##HF, 2); \
ADJUST_ALIGNMENT (VS##SF, 4); \
ADJUST_ALIGNMENT (VD##DF, 8);
ADJUST_ALIGNMENT (RVVM##LMUL##QI, 1); \
ADJUST_ALIGNMENT (RVVM##LMUL##HI, 2); \
ADJUST_ALIGNMENT (RVVM##LMUL##SI, 4); \
ADJUST_ALIGNMENT (RVVM##LMUL##DI, 8); \
ADJUST_ALIGNMENT (RVVM##LMUL##HF, 2); \
ADJUST_ALIGNMENT (RVVM##LMUL##SF, 4); \
ADJUST_ALIGNMENT (RVVM##LMUL##DF, 8);
RVV_MODES (1, VNx8, VNx4, VNx2, VNx1)
RVV_MODES (2, VNx16, VNx8, VNx4, VNx2)
RVV_MODES (4, VNx32, VNx16, VNx8, VNx4)
RVV_MODES (8, VNx64, VNx32, VNx16, VNx8)
RVV_MODES (16, VNx128, VNx64, VNx32, VNx16)
RVV_WHOLE_MODES (1)
RVV_WHOLE_MODES (2)
RVV_WHOLE_MODES (4)
RVV_WHOLE_MODES (8)
VECTOR_MODES_WITH_PREFIX (VNx, INT, 4, 0);
VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 4, 0);
ADJUST_NUNITS (VNx4QI, riscv_v_adjust_nunits (VNx4QImode, 4));
ADJUST_NUNITS (VNx2HI, riscv_v_adjust_nunits (VNx2HImode, 2));
ADJUST_NUNITS (VNx2HF, riscv_v_adjust_nunits (VNx2HFmode, 2));
ADJUST_ALIGNMENT (VNx4QI, 1);
ADJUST_ALIGNMENT (VNx2HI, 2);
ADJUST_ALIGNMENT (VNx2HF, 2);
/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1SImode and VNx1SFmode. */
VECTOR_MODE_WITH_PREFIX (VNx, INT, SI, 1, 0);
VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, SF, 1, 0);
ADJUST_NUNITS (VNx1SI, riscv_v_adjust_nunits (VNx1SImode, 1));
ADJUST_NUNITS (VNx1SF, riscv_v_adjust_nunits (VNx1SFmode, 1));
ADJUST_ALIGNMENT (VNx1SI, 4);
ADJUST_ALIGNMENT (VNx1SF, 4);
VECTOR_MODES_WITH_PREFIX (VNx, INT, 2, 0);
ADJUST_NUNITS (VNx2QI, riscv_v_adjust_nunits (VNx2QImode, 2));
ADJUST_ALIGNMENT (VNx2QI, 1);
/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1HImode and VNx1HFmode. */
VECTOR_MODE_WITH_PREFIX (VNx, INT, HI, 1, 0);
VECTOR_MODE_WITH_PREFIX (VNx, FLOAT, HF, 1, 0);
ADJUST_NUNITS (VNx1HI, riscv_v_adjust_nunits (VNx1HImode, 1));
ADJUST_NUNITS (VNx1HF, riscv_v_adjust_nunits (VNx1HFmode, 1));
ADJUST_ALIGNMENT (VNx1HI, 2);
ADJUST_ALIGNMENT (VNx1HF, 2);
/* 'VECTOR_MODES_WITH_PREFIX' does not allow ncomponents < 2.
So we use 'VECTOR_MODE_WITH_PREFIX' to define VNx1QImode. */
VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 1, 0);
ADJUST_NUNITS (VNx1QI, riscv_v_adjust_nunits (VNx1QImode, 1));
ADJUST_ALIGNMENT (VNx1QI, 1);
/* Tuple modes for segment loads/stores according to NF, NF value can be 2 ~ 8. */
/*
| Mode | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
| | LMUL | SEW/LMUL | LMUL | SEW/LMUL | LMUL | SEW/LMUL |
| VNxNFx1QI | MF4 | 32 | MF8 | 64 | N/A | N/A |
| VNxNFx2QI | MF2 | 16 | MF4 | 32 | MF8 | 64 |
| VNxNFx4QI | M1 | 8 | MF2 | 16 | MF4 | 32 |
| VNxNFx8QI | M2 | 4 | M1 | 8 | MF2 | 16 |
| VNxNFx16QI | M4 | 2 | M2 | 4 | M1 | 8 |
| VNxNFx32QI | M8 | 1 | M4 | 2 | M2 | 4 |
| VNxNFx64QI | N/A | N/A | M8 | 1 | M4 | 2 |
| VNxNFx128QI | N/A | N/A | N/A | N/A | M8 | 1 |
| VNxNFx1(HI|HF) | MF2 | 32 | MF4 | 64 | N/A | N/A |
| VNxNFx2(HI|HF) | M1 | 16 | MF2 | 32 | MF4 | 64 |
| VNxNFx4(HI|HF) | M2 | 8 | M1 | 16 | MF2 | 32 |
| VNxNFx8(HI|HF) | M4 | 4 | M2 | 8 | M1 | 16 |
| VNxNFx16(HI|HF)| M8 | 2 | M4 | 4 | M2 | 8 |
| VNxNFx32(HI|HF)| N/A | N/A | M8 | 2 | M4 | 4 |
| VNxNFx64(HI|HF)| N/A | N/A | N/A | N/A | M8 | 2 |
| VNxNFx1(SI|SF) | M1 | 32 | MF2 | 64 | MF2 | 64 |
| VNxNFx2(SI|SF) | M2 | 16 | M1 | 32 | M1 | 32 |
| VNxNFx4(SI|SF) | M4 | 8 | M2 | 16 | M2 | 16 |
| VNxNFx8(SI|SF) | M8 | 4 | M4 | 8 | M4 | 8 |
| VNxNFx16(SI|SF)| N/A | N/A | M8 | 4 | M8 | 4 |
| VNxNFx1(DI|DF) | N/A | N/A | M1 | 64 | N/A | N/A |
| VNxNFx2(DI|DF) | N/A | N/A | M2 | 32 | M1 | 64 |
| VNxNFx4(DI|DF) | N/A | N/A | M4 | 16 | M2 | 32 |
| VNxNFx8(DI|DF) | N/A | N/A | M8 | 8 | M4 | 16 |
| VNxNFx16(DI|DF)| N/A | N/A | N/A | N/A | M8 | 8 |
*/
#define RVV_TUPLE_MODES(NBYTES, NSUBPARTS, VB, VH, VS, VD) \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, NBYTES, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, NBYTES / 2, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, NBYTES / 2, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, NBYTES / 4, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, NBYTES / 4, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, DI, NBYTES / 8, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, DF, NBYTES / 8, 1); \
ADJUST_NUNITS (VNx##NSUBPARTS##x##VB##QI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VB##QI##mode, \
VB * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HI##mode, \
VH * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SI##mode, \
VS * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DI##mode, \
VD * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HF, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HF##mode, \
VH * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SF, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SF##mode, \
VS * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DF, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DF##mode, \
VD * NSUBPARTS)); \
#define RVV_FRACT_MODE(TYPE, MODE, LMUL, ALIGN) \
VECTOR_MODE_WITH_PREFIX (RVVMF, TYPE, MODE, LMUL, 0); \
ADJUST_NUNITS (RVVMF##LMUL##MODE, \
riscv_v_adjust_nunits (RVVMF##LMUL##MODE##mode, true, LMUL, \
1)); \
\
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VB##QI, 1); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HI, 2); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SI, 4); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DI, 8); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HF, 2); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SF, 4); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DF, 8);
ADJUST_ALIGNMENT (RVVMF##LMUL##MODE, ALIGN);
RVV_TUPLE_MODES (8, 2, 8, 4, 2, 1)
RVV_TUPLE_MODES (8, 3, 8, 4, 2, 1)
RVV_TUPLE_MODES (8, 4, 8, 4, 2, 1)
RVV_TUPLE_MODES (8, 5, 8, 4, 2, 1)
RVV_TUPLE_MODES (8, 6, 8, 4, 2, 1)
RVV_TUPLE_MODES (8, 7, 8, 4, 2, 1)
RVV_TUPLE_MODES (8, 8, 8, 4, 2, 1)
RVV_FRACT_MODE (INT, QI, 2, 1)
RVV_FRACT_MODE (INT, QI, 4, 1)
RVV_FRACT_MODE (INT, QI, 8, 1)
RVV_FRACT_MODE (INT, HI, 2, 2)
RVV_FRACT_MODE (INT, HI, 4, 2)
RVV_FRACT_MODE (FLOAT, HF, 2, 2)
RVV_FRACT_MODE (FLOAT, HF, 4, 2)
RVV_FRACT_MODE (INT, SI, 2, 4)
RVV_FRACT_MODE (FLOAT, SF, 2, 4)
RVV_TUPLE_MODES (16, 2, 16, 8, 4, 2)
RVV_TUPLE_MODES (16, 3, 16, 8, 4, 2)
RVV_TUPLE_MODES (16, 4, 16, 8, 4, 2)
RVV_TUPLE_MODES (16, 5, 16, 8, 4, 2)
RVV_TUPLE_MODES (16, 6, 16, 8, 4, 2)
RVV_TUPLE_MODES (16, 7, 16, 8, 4, 2)
RVV_TUPLE_MODES (16, 8, 16, 8, 4, 2)
/* Tuple modes for segment loads/stores according to NF.
RVV_TUPLE_MODES (32, 2, 32, 16, 8, 4)
RVV_TUPLE_MODES (32, 3, 32, 16, 8, 4)
RVV_TUPLE_MODES (32, 4, 32, 16, 8, 4)
Tuple modes format: RVV<LMUL>x<NF><BASEMODE>
RVV_TUPLE_MODES (64, 2, 64, 32, 16, 8)
When LMUL is MF8/MF4/MF2/M1, NF can be 2 ~ 8.
When LMUL is M2, NF can be 2 ~ 4.
When LMUL is M4, NF can be 4. */
#define RVV_TUPLE_PARTIAL_MODES(NSUBPARTS) \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 1, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 1, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 1, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, 1, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, 1, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 2, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 2, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, HF, 2, 1); \
VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 4, 1); \
#define RVV_NF8_MODES(NF) \
VECTOR_MODE_WITH_PREFIX (RVVMF8x, INT, QI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, QI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, QI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, QI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF4x, INT, HI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, HI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, HI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF4x, FLOAT, HF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, HF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, HF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF2x, INT, SI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, SI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVMF2x, FLOAT, SF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, SF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM1x, INT, DI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM1x, FLOAT, DF, NF, 1); \
\
ADJUST_NUNITS (VNx##NSUBPARTS##x1QI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x1QI##mode, \
NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x1HI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HI##mode, \
NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x1HF, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HF##mode, \
NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x1SI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SI##mode, \
NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x1SF, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SF##mode, \
NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x2QI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x2QI##mode, \
2 * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x2HI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HI##mode, \
2 * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x2HF, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HF##mode, \
2 * NSUBPARTS)); \
ADJUST_NUNITS (VNx##NSUBPARTS##x4QI, \
riscv_v_adjust_nunits (VNx##NSUBPARTS##x4QI##mode, \
4 * NSUBPARTS)); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1QI, 1); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HI, 2); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HF, 2); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SI, 4); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SF, 4); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2QI, 1); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HI, 2); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HF, 2); \
ADJUST_ALIGNMENT (VNx##NSUBPARTS##x4QI, 1);
ADJUST_NUNITS (RVVMF8x##NF##QI, \
riscv_v_adjust_nunits (RVVMF8x##NF##QImode, true, 8, NF)); \
ADJUST_NUNITS (RVVMF4x##NF##QI, \
riscv_v_adjust_nunits (RVVMF4x##NF##QImode, true, 4, NF)); \
ADJUST_NUNITS (RVVMF2x##NF##QI, \
riscv_v_adjust_nunits (RVVMF2x##NF##QImode, true, 2, NF)); \
ADJUST_NUNITS (RVVM1x##NF##QI, \
riscv_v_adjust_nunits (RVVM1x##NF##QImode, false, 1, NF)); \
ADJUST_NUNITS (RVVMF4x##NF##HI, \
riscv_v_adjust_nunits (RVVMF4x##NF##HImode, true, 4, NF)); \
ADJUST_NUNITS (RVVMF2x##NF##HI, \
riscv_v_adjust_nunits (RVVMF2x##NF##HImode, true, 2, NF)); \
ADJUST_NUNITS (RVVM1x##NF##HI, \
riscv_v_adjust_nunits (RVVM1x##NF##HImode, false, 1, NF)); \
ADJUST_NUNITS (RVVMF4x##NF##HF, \
riscv_v_adjust_nunits (RVVMF4x##NF##HFmode, true, 4, NF)); \
ADJUST_NUNITS (RVVMF2x##NF##HF, \
riscv_v_adjust_nunits (RVVMF2x##NF##HFmode, true, 2, NF)); \
ADJUST_NUNITS (RVVM1x##NF##HF, \
riscv_v_adjust_nunits (RVVM1x##NF##HFmode, false, 1, NF)); \
ADJUST_NUNITS (RVVMF2x##NF##SI, \
riscv_v_adjust_nunits (RVVMF2x##NF##SImode, true, 2, NF)); \
ADJUST_NUNITS (RVVM1x##NF##SI, \
riscv_v_adjust_nunits (RVVM1x##NF##SImode, false, 1, NF)); \
ADJUST_NUNITS (RVVMF2x##NF##SF, \
riscv_v_adjust_nunits (RVVMF2x##NF##SFmode, true, 2, NF)); \
ADJUST_NUNITS (RVVM1x##NF##SF, \
riscv_v_adjust_nunits (RVVM1x##NF##SFmode, false, 1, NF)); \
ADJUST_NUNITS (RVVM1x##NF##DI, \
riscv_v_adjust_nunits (RVVM1x##NF##DImode, false, 1, NF)); \
ADJUST_NUNITS (RVVM1x##NF##DF, \
riscv_v_adjust_nunits (RVVM1x##NF##DFmode, false, 1, NF)); \
\
ADJUST_ALIGNMENT (RVVMF8x##NF##QI, 1); \
ADJUST_ALIGNMENT (RVVMF4x##NF##QI, 1); \
ADJUST_ALIGNMENT (RVVMF2x##NF##QI, 1); \
ADJUST_ALIGNMENT (RVVM1x##NF##QI, 1); \
ADJUST_ALIGNMENT (RVVMF4x##NF##HI, 2); \
ADJUST_ALIGNMENT (RVVMF2x##NF##HI, 2); \
ADJUST_ALIGNMENT (RVVM1x##NF##HI, 2); \
ADJUST_ALIGNMENT (RVVMF4x##NF##HF, 2); \
ADJUST_ALIGNMENT (RVVMF2x##NF##HF, 2); \
ADJUST_ALIGNMENT (RVVM1x##NF##HF, 2); \
ADJUST_ALIGNMENT (RVVMF2x##NF##SI, 4); \
ADJUST_ALIGNMENT (RVVM1x##NF##SI, 4); \
ADJUST_ALIGNMENT (RVVMF2x##NF##SF, 4); \
ADJUST_ALIGNMENT (RVVM1x##NF##SF, 4); \
ADJUST_ALIGNMENT (RVVM1x##NF##DI, 8); \
ADJUST_ALIGNMENT (RVVM1x##NF##DF, 8);
RVV_TUPLE_PARTIAL_MODES (2)
RVV_TUPLE_PARTIAL_MODES (3)
RVV_TUPLE_PARTIAL_MODES (4)
RVV_TUPLE_PARTIAL_MODES (5)
RVV_TUPLE_PARTIAL_MODES (6)
RVV_TUPLE_PARTIAL_MODES (7)
RVV_TUPLE_PARTIAL_MODES (8)
RVV_NF8_MODES (8)
RVV_NF8_MODES (7)
RVV_NF8_MODES (6)
RVV_NF8_MODES (5)
RVV_NF8_MODES (4)
RVV_NF8_MODES (3)
RVV_NF8_MODES (2)
#define RVV_NF4_MODES(NF) \
VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, QI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, HI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, HF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, SI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, SF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM2x, INT, DI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM2x, FLOAT, DF, NF, 1); \
\
ADJUST_NUNITS (RVVM2x##NF##QI, \
riscv_v_adjust_nunits (RVVM2x##NF##QImode, false, 2, NF)); \
ADJUST_NUNITS (RVVM2x##NF##HI, \
riscv_v_adjust_nunits (RVVM2x##NF##HImode, false, 2, NF)); \
ADJUST_NUNITS (RVVM2x##NF##HF, \
riscv_v_adjust_nunits (RVVM2x##NF##HFmode, false, 2, NF)); \
ADJUST_NUNITS (RVVM2x##NF##SI, \
riscv_v_adjust_nunits (RVVM2x##NF##SImode, false, 2, NF)); \
ADJUST_NUNITS (RVVM2x##NF##SF, \
riscv_v_adjust_nunits (RVVM2x##NF##SFmode, false, 2, NF)); \
ADJUST_NUNITS (RVVM2x##NF##DI, \
riscv_v_adjust_nunits (RVVM2x##NF##DImode, false, 2, NF)); \
ADJUST_NUNITS (RVVM2x##NF##DF, \
riscv_v_adjust_nunits (RVVM2x##NF##DFmode, false, 2, NF)); \
\
ADJUST_ALIGNMENT (RVVM2x##NF##QI, 1); \
ADJUST_ALIGNMENT (RVVM2x##NF##HI, 2); \
ADJUST_ALIGNMENT (RVVM2x##NF##HF, 2); \
ADJUST_ALIGNMENT (RVVM2x##NF##SI, 4); \
ADJUST_ALIGNMENT (RVVM2x##NF##SF, 4); \
ADJUST_ALIGNMENT (RVVM2x##NF##DI, 8); \
ADJUST_ALIGNMENT (RVVM2x##NF##DF, 8);
RVV_NF4_MODES (2)
RVV_NF4_MODES (3)
RVV_NF4_MODES (4)
#define RVV_NF2_MODES(NF) \
VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, QI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, HI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, HF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, SI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, SF, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM4x, INT, DI, NF, 1); \
VECTOR_MODE_WITH_PREFIX (RVVM4x, FLOAT, DF, NF, 1); \
\
ADJUST_NUNITS (RVVM4x##NF##QI, \
riscv_v_adjust_nunits (RVVM4x##NF##QImode, false, 4, NF)); \
ADJUST_NUNITS (RVVM4x##NF##HI, \
riscv_v_adjust_nunits (RVVM4x##NF##HImode, false, 4, NF)); \
ADJUST_NUNITS (RVVM4x##NF##HF, \
riscv_v_adjust_nunits (RVVM4x##NF##HFmode, false, 4, NF)); \
ADJUST_NUNITS (RVVM4x##NF##SI, \
riscv_v_adjust_nunits (RVVM4x##NF##SImode, false, 4, NF)); \
ADJUST_NUNITS (RVVM4x##NF##SF, \
riscv_v_adjust_nunits (RVVM4x##NF##SFmode, false, 4, NF)); \
ADJUST_NUNITS (RVVM4x##NF##DI, \
riscv_v_adjust_nunits (RVVM4x##NF##DImode, false, 4, NF)); \
ADJUST_NUNITS (RVVM4x##NF##DF, \
riscv_v_adjust_nunits (RVVM4x##NF##DFmode, false, 4, NF)); \
\
ADJUST_ALIGNMENT (RVVM4x##NF##QI, 1); \
ADJUST_ALIGNMENT (RVVM4x##NF##HI, 2); \
ADJUST_ALIGNMENT (RVVM4x##NF##HF, 2); \
ADJUST_ALIGNMENT (RVVM4x##NF##SI, 4); \
ADJUST_ALIGNMENT (RVVM4x##NF##SF, 4); \
ADJUST_ALIGNMENT (RVVM4x##NF##DI, 8); \
ADJUST_ALIGNMENT (RVVM4x##NF##DF, 8);
RVV_NF2_MODES (2)
/* TODO: According to RISC-V 'V' ISA spec, the maximun vector length can
be 65536 for a single vector register which means the vector mode in

gcc/config/riscv/riscv-v.cc

@@ -1550,37 +1550,20 @@ legitimize_move (rtx dest, rtx src)
/* VTYPE information for machine_mode. */
struct mode_vtype_group
{
enum vlmul_type vlmul_for_min_vlen32[NUM_MACHINE_MODES];
uint8_t ratio_for_min_vlen32[NUM_MACHINE_MODES];
enum vlmul_type vlmul_for_min_vlen64[NUM_MACHINE_MODES];
uint8_t ratio_for_min_vlen64[NUM_MACHINE_MODES];
enum vlmul_type vlmul_for_for_vlen128[NUM_MACHINE_MODES];
uint8_t ratio_for_for_vlen128[NUM_MACHINE_MODES];
enum vlmul_type vlmul[NUM_MACHINE_MODES];
uint8_t ratio[NUM_MACHINE_MODES];
machine_mode subpart_mode[NUM_MACHINE_MODES];
uint8_t nf[NUM_MACHINE_MODES];
mode_vtype_group ()
{
#define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLEN32, \
VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64, \
VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128) \
vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32; \
ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32; \
vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64; \
ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64; \
vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128; \
ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64, \
RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128, \
RATIO_FOR_MIN_VLEN128) \
#define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO) \
vlmul[MODE##mode] = VLMUL; \
ratio[MODE##mode] = RATIO;
#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL, RATIO) \
subpart_mode[MODE##mode] = SUBPART_MODE##mode; \
nf[MODE##mode] = NF; \
vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32; \
ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32; \
vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64; \
ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64; \
vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128; \
ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
vlmul[MODE##mode] = VLMUL; \
ratio[MODE##mode] = RATIO;
#include "riscv-vector-switch.def"
#undef ENTRY
#undef TUPLE_ENTRY
@@ -1593,12 +1576,7 @@ static mode_vtype_group mode_vtype_infos;
enum vlmul_type
get_vlmul (machine_mode mode)
{
if (TARGET_MIN_VLEN >= 128)
return mode_vtype_infos.vlmul_for_for_vlen128[mode];
else if (TARGET_MIN_VLEN == 32)
return mode_vtype_infos.vlmul_for_min_vlen32[mode];
else
return mode_vtype_infos.vlmul_for_min_vlen64[mode];
return mode_vtype_infos.vlmul[mode];
}
/* Return the NF value of the corresponding mode. */
@@ -1610,8 +1588,8 @@ get_nf (machine_mode mode)
return mode_vtype_infos.nf[mode];
}
/* Return the subpart mode of the tuple mode. For VNx2x1SImode,
the subpart mode is VNx1SImode. This will help to build
/* Return the subpart mode of the tuple mode. For RVVM2x2SImode,
the subpart mode is RVVM2SImode. This will help to build
array/struct type in builtins. */
machine_mode
get_subpart_mode (machine_mode mode)
@@ -1625,12 +1603,7 @@ get_subpart_mode (machine_mode mode)
unsigned int
get_ratio (machine_mode mode)
{
if (TARGET_MIN_VLEN >= 128)
return mode_vtype_infos.ratio_for_for_vlen128[mode];
else if (TARGET_MIN_VLEN == 32)
return mode_vtype_infos.ratio_for_min_vlen32[mode];
else
return mode_vtype_infos.ratio_for_min_vlen64[mode];
return mode_vtype_infos.ratio[mode];
}
/* Get ta according to operand[tail_op_idx]. */
@@ -2171,12 +2144,12 @@ preferred_simd_mode (scalar_mode mode)
/* We will disable auto-vectorization when TARGET_MIN_VLEN < 128 &&
riscv_autovec_lmul < RVV_M2. Since GCC loop vectorizer report ICE when we
enable -march=rv64gc_zve32* and -march=rv32gc_zve64*. in the
'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since we have
VNx1SImode in -march=*zve32* and VNx1DImode in -march=*zve64*, they are
enabled in targetm. vector_mode_supported_p and SLP vectorizer will try to
use them. Currently, we can support auto-vectorization in
-march=rv32_zve32x_zvl128b. Wheras, -march=rv32_zve32x_zvl32b or
-march=rv32_zve32x_zvl64b are disabled. */
'can_duplicate_and_interleave_p' of tree-vect-slp.cc. Since both
RVVM1SImode in -march=*zve32*_zvl32b and RVVM1DImode in
-march=*zve64*_zvl64b are NUNITS = poly (1, 1), they will cause ICE in loop
vectorizer when we enable them in this target hook. Currently, we can
support auto-vectorization in -march=rv32_zve32x_zvl128b. Wheras,
-march=rv32_zve32x_zvl32b or -march=rv32_zve32x_zvl64b are disabled. */
if (autovec_use_vlmax_p ())
{
if (TARGET_MIN_VLEN < 128 && riscv_autovec_lmul < RVV_M2)
@@ -2371,9 +2344,9 @@ autovectorize_vector_modes (vector_modes *modes, bool)
poly_uint64 full_size
= BYTES_PER_RISCV_VECTOR * ((int) riscv_autovec_lmul);
/* Start with a VNxYYQImode where YY is the number of units that
/* Start with a RVV<LMUL>QImode where LMUL is the number of units that
fit a whole vector.
Then try YY = nunits / 2, nunits / 4 and nunits / 8 which
Then try LMUL = nunits / 2, nunits / 4 and nunits / 8 which
is guided by the extensions we have available (vf2, vf4 and vf8).
- full_size: Try using full vectors for all element types.

gcc/config/riscv/riscv-vector-builtins.cc

@@ -109,10 +109,8 @@ const char *const operand_suffixes[NUM_OP_TYPES] = {
/* Static information about type suffix for each RVV type. */
const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX, \
VSETVL_SUFFIX) \
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX) \
{#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE, \
NF, VECTOR_SUFFIX) \
@@ -2802,12 +2800,9 @@ register_builtin_types ()
tree int64_type_node = get_typenode_from_name (INT64_TYPE);
machine_mode mode;
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
VECTOR_MODE_MIN_VLEN_32, ARGS...) \
mode = TARGET_MIN_VLEN >= 128 ? VECTOR_MODE_MIN_VLEN_128##mode \
: TARGET_MIN_VLEN >= 64 ? VECTOR_MODE_MIN_VLEN_64##mode \
: VECTOR_MODE_MIN_VLEN_32##mode; \
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
ARGS...) \
mode = VECTOR_MODE##mode; \
register_builtin_type (VECTOR_TYPE_##NAME, SCALAR_TYPE##_type_node, mode);
#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE, \
NF, VECTOR_SUFFIX) \

gcc/config/riscv/riscv-vector-builtins.def

@@ -28,24 +28,19 @@ along with GCC; see the file COPYING3. If not see
"build_vector_type_for_mode". For "vint32m1_t", we use "intSI_type_node" in
RV64. Otherwise, we use "long_integer_type_node".
5.The 'VECTOR_MODE' is the machine modes of corresponding RVV type used
in "build_vector_type_for_mode" when TARGET_MIN_VLEN > 32.
For example: VECTOR_MODE = VNx2SI for "vint32m1_t".
6.The 'VECTOR_MODE_MIN_VLEN_32' is the machine modes of corresponding RVV
type used in "build_vector_type_for_mode" when TARGET_MIN_VLEN = 32. For
example: VECTOR_MODE_MIN_VLEN_32 = VNx1SI for "vint32m1_t".
7.The 'VECTOR_SUFFIX' define mode suffix for vector type.
in "build_vector_type_for_mode".
For example: VECTOR_MODE = RVVM1SImode for "vint32m1_t".
6.The 'VECTOR_SUFFIX' define mode suffix for vector type.
For example: type_suffixes[VECTOR_TYPE_vin32m1_t].vector = i32m1.
8.The 'SCALAR_SUFFIX' define mode suffix for scalar type.
7.The 'SCALAR_SUFFIX' define mode suffix for scalar type.
For example: type_suffixes[VECTOR_TYPE_vin32m1_t].scalar = i32.
9.The 'VSETVL_SUFFIX' define mode suffix for vsetvli instruction.
8.The 'VSETVL_SUFFIX' define mode suffix for vsetvli instruction.
For example: type_suffixes[VECTOR_TYPE_vin32m1_t].vsetvl = e32m1.
*/
#ifndef DEF_RVV_TYPE
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, \
VECTOR_MODE_MIN_VLEN_128, VECTOR_MODE_MIN_VLEN_64, \
VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX, \
VSETVL_SUFFIX)
#define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, SCALAR_TYPE, VECTOR_MODE, \
VECTOR_SUFFIX, SCALAR_SUFFIX, VSETVL_SUFFIX)
#endif
#ifndef DEF_RVV_TUPLE_TYPE
@@ -101,47 +96,34 @@ along with GCC; see the file COPYING3. If not see
/* SEW/LMUL = 64:
Only enable when TARGET_MIN_VLEN > 32.
Machine mode = VNx1BImode when TARGET_MIN_VLEN < 128.
Machine mode = VNx2BImode when TARGET_MIN_VLEN >= 128. */
DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, VNx2BI, VNx1BI, VOID, _b64, , )
Machine mode = RVVMF64BImode. */
DEF_RVV_TYPE (vbool64_t, 14, __rvv_bool64_t, boolean, RVVMF64BI, _b64, , )
/* SEW/LMUL = 32:
Machine mode = VNx2BImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx1BImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, VNx4BI, VNx2BI, VNx1BI, _b32, , )
Machine mode = RVVMF32BImode. */
DEF_RVV_TYPE (vbool32_t, 14, __rvv_bool32_t, boolean, RVVMF32BI, _b32, , )
/* SEW/LMUL = 16:
Machine mode = VNx8BImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx2BImode when TARGET_MIN_VLEN = 32.
Machine mode = VNx4BImode when TARGET_MIN_VLEN > 32. */
DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, VNx8BI, VNx4BI, VNx2BI, _b16, , )
Machine mode = RVVMF16BImode. */
DEF_RVV_TYPE (vbool16_t, 14, __rvv_bool16_t, boolean, RVVMF16BI, _b16, , )
/* SEW/LMUL = 8:
Machine mode = VNx16BImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx8BImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx4BImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, VNx16BI, VNx8BI, VNx4BI, _b8, , )
Machine mode = RVVMF8BImode. */
DEF_RVV_TYPE (vbool8_t, 13, __rvv_bool8_t, boolean, RVVMF8BI, _b8, , )
/* SEW/LMUL = 4:
Machine mode = VNx32BImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx16BImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx8BImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, VNx32BI, VNx16BI, VNx8BI, _b4, , )
Machine mode = RVVMF4BImode. */
DEF_RVV_TYPE (vbool4_t, 13, __rvv_bool4_t, boolean, RVVMF4BI, _b4, , )
/* SEW/LMUL = 2:
Machine mode = VNx64BImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx32BImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx16BImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, VNx64BI, VNx32BI, VNx16BI, _b2, , )
Machine mode = RVVMF2BImode. */
DEF_RVV_TYPE (vbool2_t, 13, __rvv_bool2_t, boolean, RVVMF2BI, _b2, , )
/* SEW/LMUL = 1:
Machine mode = VNx128BImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx64BImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx32BImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, VNx128BI, VNx64BI, VNx32BI, _b1, , )
Machine mode = RVVM1BImode. */
DEF_RVV_TYPE (vbool1_t, 13, __rvv_bool1_t, boolean, RVVM1BI, _b1, , )
/* LMUL = 1/8:
Only enble when TARGET_MIN_VLEN > 32.
Machine mode = VNx1QImode when TARGET_MIN_VLEN < 128.
Machine mode = VNx2QImode when TARGET_MIN_VLEN >= 128. */
DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, VNx2QI, VNx1QI, VOID, _i8mf8, _i8,
Machine mode = RVVMF8QImode. */
DEF_RVV_TYPE (vint8mf8_t, 15, __rvv_int8mf8_t, int8, RVVMF8QI, _i8mf8, _i8,
_e8mf8)
DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, RVVMF8QI, _u8mf8, _u8,
_e8mf8)
DEF_RVV_TYPE (vuint8mf8_t, 16, __rvv_uint8mf8_t, uint8, VNx2QI, VNx1QI, VOID, _u8mf8,
_u8, _e8mf8)
/* Define tuple types for SEW = 8, LMUL = MF8. */
DEF_RVV_TUPLE_TYPE (vint8mf8x2_t, 17, __rvv_int8mf8x2_t, vint8mf8_t, int8, 2, _i8mf8x2)
DEF_RVV_TUPLE_TYPE (vuint8mf8x2_t, 18, __rvv_uint8mf8x2_t, vuint8mf8_t, uint8, 2, _u8mf8x2)
@@ -158,13 +140,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf8x7_t, 18, __rvv_uint8mf8x7_t, vuint8mf8_t, uint8, 7
DEF_RVV_TUPLE_TYPE (vint8mf8x8_t, 17, __rvv_int8mf8x8_t, vint8mf8_t, int8, 8, _i8mf8x8)
DEF_RVV_TUPLE_TYPE (vuint8mf8x8_t, 18, __rvv_uint8mf8x8_t, vuint8mf8_t, uint8, 8, _u8mf8x8)
/* LMUL = 1/4:
Machine mode = VNx4QImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx2QImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx1QImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, VNx4QI, VNx2QI, VNx1QI, _i8mf4,
_i8, _e8mf4)
DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, VNx4QI, VNx2QI, VNx1QI, _u8mf4,
_u8, _e8mf4)
Machine mode = RVVMF4QImode. */
DEF_RVV_TYPE (vint8mf4_t, 15, __rvv_int8mf4_t, int8, RVVMF4QI, _i8mf4, _i8,
_e8mf4)
DEF_RVV_TYPE (vuint8mf4_t, 16, __rvv_uint8mf4_t, uint8, RVVMF4QI, _u8mf4, _u8,
_e8mf4)
/* Define tuple types for SEW = 8, LMUL = MF4. */
DEF_RVV_TUPLE_TYPE (vint8mf4x2_t, 17, __rvv_int8mf4x2_t, vint8mf4_t, int8, 2, _i8mf4x2)
DEF_RVV_TUPLE_TYPE (vuint8mf4x2_t, 18, __rvv_uint8mf4x2_t, vuint8mf4_t, uint8, 2, _u8mf4x2)
@@ -181,13 +161,11 @@ DEF_RVV_TUPLE_TYPE (vuint8mf4x7_t, 18, __rvv_uint8mf4x7_t, vuint8mf4_t, uint8, 7
DEF_RVV_TUPLE_TYPE (vint8mf4x8_t, 17, __rvv_int8mf4x8_t, vint8mf4_t, int8, 8, _i8mf4x8)
DEF_RVV_TUPLE_TYPE (vuint8mf4x8_t, 18, __rvv_uint8mf4x8_t, vuint8mf4_t, uint8, 8, _u8mf4x8)
/* LMUL = 1/2:
Machine mode = VNx8QImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx4QImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx2QImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, VNx8QI, VNx4QI, VNx2QI, _i8mf2,
_i8, _e8mf2)
DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, VNx8QI, VNx4QI, VNx2QI, _u8mf2,
_u8, _e8mf2)
Machine mode = RVVMF2QImode. */
DEF_RVV_TYPE (vint8mf2_t, 15, __rvv_int8mf2_t, int8, RVVMF2QI, _i8mf2, _i8,
_e8mf2)
DEF_RVV_TYPE (vuint8mf2_t, 16, __rvv_uint8mf2_t, uint8, RVVMF2QI, _u8mf2, _u8,
_e8mf2)
/* Define tuple types for SEW = 8, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vint8mf2x2_t, 17, __rvv_int8mf2x2_t, vint8mf2_t, int8, 2, _i8mf2x2)
DEF_RVV_TUPLE_TYPE (vuint8mf2x2_t, 18, __rvv_uint8mf2x2_t, vuint8mf2_t, uint8, 2, _u8mf2x2)
@@ -204,13 +182,10 @@ DEF_RVV_TUPLE_TYPE (vuint8mf2x7_t, 18, __rvv_uint8mf2x7_t, vuint8mf2_t, uint8, 7
DEF_RVV_TUPLE_TYPE (vint8mf2x8_t, 17, __rvv_int8mf2x8_t, vint8mf2_t, int8, 8, _i8mf2x8)
DEF_RVV_TUPLE_TYPE (vuint8mf2x8_t, 18, __rvv_uint8mf2x8_t, vuint8mf2_t, uint8, 8, _u8mf2x8)
/* LMUL = 1:
Machine mode = VNx16QImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx8QImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx4QImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, VNx16QI, VNx8QI, VNx4QI, _i8m1, _i8,
Machine mode = RVVM1QImode. */
DEF_RVV_TYPE (vint8m1_t, 14, __rvv_int8m1_t, int8, RVVM1QI, _i8m1, _i8, _e8m1)
DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, RVVM1QI, _u8m1, _u8,
_e8m1)
DEF_RVV_TYPE (vuint8m1_t, 15, __rvv_uint8m1_t, uint8, VNx16QI, VNx8QI, VNx4QI, _u8m1,
_u8, _e8m1)
/* Define tuple types for SEW = 8, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint8m1x2_t, 16, __rvv_int8m1x2_t, vint8m1_t, int8, 2, _i8m1x2)
DEF_RVV_TUPLE_TYPE (vuint8m1x2_t, 17, __rvv_uint8m1x2_t, vuint8m1_t, uint8, 2, _u8m1x2)
@@ -227,13 +202,10 @@ DEF_RVV_TUPLE_TYPE (vuint8m1x7_t, 17, __rvv_uint8m1x7_t, vuint8m1_t, uint8, 7, _
DEF_RVV_TUPLE_TYPE (vint8m1x8_t, 16, __rvv_int8m1x8_t, vint8m1_t, int8, 8, _i8m1x8)
DEF_RVV_TUPLE_TYPE (vuint8m1x8_t, 17, __rvv_uint8m1x8_t, vuint8m1_t, uint8, 8, _u8m1x8)
/* LMUL = 2:
Machine mode = VNx32QImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx16QImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx8QImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, VNx32QI, VNx16QI, VNx8QI, _i8m2, _i8,
Machine mode = RVVM2QImode. */
DEF_RVV_TYPE (vint8m2_t, 14, __rvv_int8m2_t, int8, RVVM2QI, _i8m2, _i8, _e8m2)
DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, RVVM2QI, _u8m2, _u8,
_e8m2)
DEF_RVV_TYPE (vuint8m2_t, 15, __rvv_uint8m2_t, uint8, VNx32QI, VNx16QI, VNx8QI, _u8m2,
_u8, _e8m2)
/* Define tuple types for SEW = 8, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint8m2x2_t, 16, __rvv_int8m2x2_t, vint8m2_t, int8, 2, _i8m2x2)
DEF_RVV_TUPLE_TYPE (vuint8m2x2_t, 17, __rvv_uint8m2x2_t, vuint8m2_t, uint8, 2, _u8m2x2)
@@ -242,33 +214,26 @@ DEF_RVV_TUPLE_TYPE (vuint8m2x3_t, 17, __rvv_uint8m2x3_t, vuint8m2_t, uint8, 3, _
DEF_RVV_TUPLE_TYPE (vint8m2x4_t, 16, __rvv_int8m2x4_t, vint8m2_t, int8, 4, _i8m2x4)
DEF_RVV_TUPLE_TYPE (vuint8m2x4_t, 17, __rvv_uint8m2x4_t, vuint8m2_t, uint8, 4, _u8m2x4)
/* LMUL = 4:
Machine mode = VNx64QImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx32QImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx16QImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, VNx64QI, VNx32QI, VNx16QI, _i8m4, _i8,
Machine mode = RVVM4QImode. */
DEF_RVV_TYPE (vint8m4_t, 14, __rvv_int8m4_t, int8, RVVM4QI, _i8m4, _i8, _e8m4)
DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, RVVM4QI, _u8m4, _u8,
_e8m4)
DEF_RVV_TYPE (vuint8m4_t, 15, __rvv_uint8m4_t, uint8, VNx64QI, VNx32QI, VNx16QI, _u8m4,
_u8, _e8m4)
/* Define tuple types for SEW = 8, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint8m4x2_t, 16, __rvv_int8m4x2_t, vint8m4_t, int8, 2, _i8m4x2)
DEF_RVV_TUPLE_TYPE (vuint8m4x2_t, 17, __rvv_uint8m4x2_t, vuint8m4_t, uint8, 2, _u8m4x2)
/* LMUL = 8:
Machine mode = VNx128QImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx64QImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx32QImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, VNx128QI, VNx64QI, VNx32QI, _i8m8, _i8,
Machine mode = RVVM8QImode. */
DEF_RVV_TYPE (vint8m8_t, 14, __rvv_int8m8_t, int8, RVVM8QI, _i8m8, _i8, _e8m8)
DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, RVVM8QI, _u8m8, _u8,
_e8m8)
DEF_RVV_TYPE (vuint8m8_t, 15, __rvv_uint8m8_t, uint8, VNx128QI, VNx64QI, VNx32QI, _u8m8,
_u8, _e8m8)
/* LMUL = 1/4:
Only enable when TARGET_MIN_VLEN > 32.
Machine mode = VNx1HImode when TARGET_MIN_VLEN < 128.
Machine mode = VNx2HImode when TARGET_MIN_VLEN >= 128. */
DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, VNx2HI, VNx1HI, VOID, _i16mf4,
_i16, _e16mf4)
DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, VNx2HI, VNx1HI, VOID,
_u16mf4, _u16, _e16mf4)
Machine mode = RVVMF4HImode. */
DEF_RVV_TYPE (vint16mf4_t, 16, __rvv_int16mf4_t, int16, RVVMF4HI, _i16mf4, _i16,
_e16mf4)
DEF_RVV_TYPE (vuint16mf4_t, 17, __rvv_uint16mf4_t, uint16, RVVMF4HI, _u16mf4,
_u16, _e16mf4)
/* Define tuple types for SEW = 16, LMUL = MF4. */
DEF_RVV_TUPLE_TYPE (vint16mf4x2_t, 18, __rvv_int16mf4x2_t, vint16mf4_t, int16, 2, _i16mf4x2)
DEF_RVV_TUPLE_TYPE (vuint16mf4x2_t, 19, __rvv_uint16mf4x2_t, vuint16mf4_t, uint16, 2, _u16mf4x2)
@@ -285,13 +250,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf4x7_t, 19, __rvv_uint16mf4x7_t, vuint16mf4_t, uint1
DEF_RVV_TUPLE_TYPE (vint16mf4x8_t, 18, __rvv_int16mf4x8_t, vint16mf4_t, int16, 8, _i16mf4x8)
DEF_RVV_TUPLE_TYPE (vuint16mf4x8_t, 19, __rvv_uint16mf4x8_t, vuint16mf4_t, uint16, 8, _u16mf4x8)
/* LMUL = 1/2:
Machine mode = VNx4HImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx2HImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx1HImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, VNx4HI, VNx2HI, VNx1HI, _i16mf2,
_i16, _e16mf2)
DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, VNx4HI, VNx2HI, VNx1HI,
_u16mf2, _u16, _e16mf2)
Machine mode = RVVMF2HImode. */
DEF_RVV_TYPE (vint16mf2_t, 16, __rvv_int16mf2_t, int16, RVVMF2HI, _i16mf2, _i16,
_e16mf2)
DEF_RVV_TYPE (vuint16mf2_t, 17, __rvv_uint16mf2_t, uint16, RVVMF2HI, _u16mf2,
_u16, _e16mf2)
/* Define tuple types for SEW = 16, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vint16mf2x2_t, 18, __rvv_int16mf2x2_t, vint16mf2_t, int16, 2, _i16mf2x2)
DEF_RVV_TUPLE_TYPE (vuint16mf2x2_t, 19, __rvv_uint16mf2x2_t, vuint16mf2_t, uint16, 2, _u16mf2x2)
@@ -308,13 +271,11 @@ DEF_RVV_TUPLE_TYPE (vuint16mf2x7_t, 19, __rvv_uint16mf2x7_t, vuint16mf2_t, uint1
DEF_RVV_TUPLE_TYPE (vint16mf2x8_t, 18, __rvv_int16mf2x8_t, vint16mf2_t, int16, 8, _i16mf2x8)
DEF_RVV_TUPLE_TYPE (vuint16mf2x8_t, 19, __rvv_uint16mf2x8_t, vuint16mf2_t, uint16, 8, _u16mf2x8)
/* LMUL = 1:
Machine mode = VNx8HImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx4HImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx2HImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, VNx8HI, VNx4HI, VNx2HI, _i16m1,
_i16, _e16m1)
DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, VNx8HI, VNx4HI, VNx2HI, _u16m1,
_u16, _e16m1)
Machine mode = RVVM1HImode. */
DEF_RVV_TYPE (vint16m1_t, 15, __rvv_int16m1_t, int16, RVVM1HI, _i16m1, _i16,
_e16m1)
DEF_RVV_TYPE (vuint16m1_t, 16, __rvv_uint16m1_t, uint16, RVVM1HI, _u16m1, _u16,
_e16m1)
/* Define tuple types for SEW = 16, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint16m1x2_t, 17, __rvv_int16m1x2_t, vint16m1_t, int16, 2, _i16m1x2)
DEF_RVV_TUPLE_TYPE (vuint16m1x2_t, 18, __rvv_uint16m1x2_t, vuint16m1_t, uint16, 2, _u16m1x2)
@@ -331,13 +292,11 @@ DEF_RVV_TUPLE_TYPE (vuint16m1x7_t, 18, __rvv_uint16m1x7_t, vuint16m1_t, uint16,
DEF_RVV_TUPLE_TYPE (vint16m1x8_t, 17, __rvv_int16m1x8_t, vint16m1_t, int16, 8, _i16m1x8)
DEF_RVV_TUPLE_TYPE (vuint16m1x8_t, 18, __rvv_uint16m1x8_t, vuint16m1_t, uint16, 8, _u16m1x8)
/* LMUL = 2:
Machine mode = VNx16HImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx8HImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx4HImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, VNx16HI, VNx8HI, VNx4HI, _i16m2,
_i16, _e16m2)
DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, VNx16HI, VNx8HI, VNx4HI, _u16m2,
_u16, _e16m2)
Machine mode = RVVM2HImode. */
DEF_RVV_TYPE (vint16m2_t, 15, __rvv_int16m2_t, int16, RVVM2HI, _i16m2, _i16,
_e16m2)
DEF_RVV_TYPE (vuint16m2_t, 16, __rvv_uint16m2_t, uint16, RVVM2HI, _u16m2, _u16,
_e16m2)
/* Define tuple types for SEW = 16, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint16m2x2_t, 17, __rvv_int16m2x2_t, vint16m2_t, int16, 2, _i16m2x2)
DEF_RVV_TUPLE_TYPE (vuint16m2x2_t, 18, __rvv_uint16m2x2_t, vuint16m2_t, uint16, 2, _u16m2x2)
@@ -346,33 +305,28 @@ DEF_RVV_TUPLE_TYPE (vuint16m2x3_t, 18, __rvv_uint16m2x3_t, vuint16m2_t, uint16,
DEF_RVV_TUPLE_TYPE (vint16m2x4_t, 17, __rvv_int16m2x4_t, vint16m2_t, int16, 4, _i16m2x4)
DEF_RVV_TUPLE_TYPE (vuint16m2x4_t, 18, __rvv_uint16m2x4_t, vuint16m2_t, uint16, 4, _u16m2x4)
/* LMUL = 4:
Machine mode = VNx32HImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx16HImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx8HImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, VNx32HI, VNx16HI, VNx8HI, _i16m4,
_i16, _e16m4)
DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, VNx32HI, VNx16HI, VNx8HI,
_u16m4, _u16, _e16m4)
Machine mode = RVVM4HImode. */
DEF_RVV_TYPE (vint16m4_t, 15, __rvv_int16m4_t, int16, RVVM4HI, _i16m4, _i16,
_e16m4)
DEF_RVV_TYPE (vuint16m4_t, 16, __rvv_uint16m4_t, uint16, RVVM4HI, _u16m4, _u16,
_e16m4)
/* Define tuple types for SEW = 16, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint16m4x2_t, 17, __rvv_int16m4x2_t, vint16m4_t, int16, 2, _i16m4x2)
DEF_RVV_TUPLE_TYPE (vuint16m4x2_t, 18, __rvv_uint16m4x2_t, vuint16m4_t, uint16, 2, _u16m4x2)
/* LMUL = 8:
Machine mode = VNx64HImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx32HImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx16HImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, VNx64HI, VNx32HI, VNx16HI, _i16m8,
_i16, _e16m8)
DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, VNx64HI, VNx32HI, VNx16HI,
_u16m8, _u16, _e16m8)
Machine mode = RVVM8HImode. */
DEF_RVV_TYPE (vint16m8_t, 15, __rvv_int16m8_t, int16, RVVM8HI, _i16m8, _i16,
_e16m8)
DEF_RVV_TYPE (vuint16m8_t, 16, __rvv_uint16m8_t, uint16, RVVM8HI, _u16m8, _u16,
_e16m8)
/* LMUL = 1/2:
Only enable when TARGET_MIN_VLEN > 32.
Machine mode = VNx1SImode when TARGET_MIN_VLEN < 128.
Machine mode = VNx2SImode when TARGET_MIN_VLEN >= 128. */
DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, VNx2SI, VNx1SI, VOID, _i32mf2,
_i32, _e32mf2)
DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, VNx2SI, VNx1SI, VOID,
_u32mf2, _u32, _e32mf2)
Machine mode = RVVMF2SImode. */
DEF_RVV_TYPE (vint32mf2_t, 16, __rvv_int32mf2_t, int32, RVVMF2SI, _i32mf2, _i32,
_e32mf2)
DEF_RVV_TYPE (vuint32mf2_t, 17, __rvv_uint32mf2_t, uint32, RVVMF2SI, _u32mf2,
_u32, _e32mf2)
/* Define tuple types for SEW = 32, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vint32mf2x2_t, 18, __rvv_int32mf2x2_t, vint32mf2_t, int32, 2, _i32mf2x2)
DEF_RVV_TUPLE_TYPE (vuint32mf2x2_t, 19, __rvv_uint32mf2x2_t, vuint32mf2_t, uint32, 2, _u32mf2x2)
@@ -389,13 +343,11 @@ DEF_RVV_TUPLE_TYPE (vuint32mf2x7_t, 19, __rvv_uint32mf2x7_t, vuint32mf2_t, uint3
DEF_RVV_TUPLE_TYPE (vint32mf2x8_t, 18, __rvv_int32mf2x8_t, vint32mf2_t, int32, 8, _i32mf2x8)
DEF_RVV_TUPLE_TYPE (vuint32mf2x8_t, 19, __rvv_uint32mf2x8_t, vuint32mf2_t, uint32, 8, _u32mf2x8)
/* LMUL = 1:
Machine mode = VNx4SImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx2SImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx1SImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, VNx4SI, VNx2SI, VNx1SI, _i32m1,
_i32, _e32m1)
DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, VNx4SI, VNx2SI, VNx1SI, _u32m1,
_u32, _e32m1)
Machine mode = RVVM1SImode. */
DEF_RVV_TYPE (vint32m1_t, 15, __rvv_int32m1_t, int32, RVVM1SI, _i32m1, _i32,
_e32m1)
DEF_RVV_TYPE (vuint32m1_t, 16, __rvv_uint32m1_t, uint32, RVVM1SI, _u32m1, _u32,
_e32m1)
/* Define tuple types for SEW = 32, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint32m1x2_t, 17, __rvv_int32m1x2_t, vint32m1_t, int32, 2, _i32m1x2)
DEF_RVV_TUPLE_TYPE (vuint32m1x2_t, 18, __rvv_uint32m1x2_t, vuint32m1_t, uint32, 2, _u32m1x2)
@@ -412,13 +364,11 @@ DEF_RVV_TUPLE_TYPE (vuint32m1x7_t, 18, __rvv_uint32m1x7_t, vuint32m1_t, uint32,
DEF_RVV_TUPLE_TYPE (vint32m1x8_t, 17, __rvv_int32m1x8_t, vint32m1_t, int32, 8, _i32m1x8)
DEF_RVV_TUPLE_TYPE (vuint32m1x8_t, 18, __rvv_uint32m1x8_t, vuint32m1_t, uint32, 8, _u32m1x8)
/* LMUL = 2:
Machine mode = VNx8SImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx4SImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx2SImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, VNx8SI, VNx4SI, VNx2SI, _i32m2,
_i32, _e32m2)
DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, VNx8SI, VNx4SI, VNx2SI, _u32m2,
_u32, _e32m2)
Machine mode = RVVM2SImode. */
DEF_RVV_TYPE (vint32m2_t, 15, __rvv_int32m2_t, int32, RVVM2SI, _i32m2, _i32,
_e32m2)
DEF_RVV_TYPE (vuint32m2_t, 16, __rvv_uint32m2_t, uint32, RVVM2SI, _u32m2, _u32,
_e32m2)
/* Define tuple types for SEW = 32, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint32m2x2_t, 17, __rvv_int32m2x2_t, vint32m2_t, int32, 2, _i32m2x2)
DEF_RVV_TUPLE_TYPE (vuint32m2x2_t, 18, __rvv_uint32m2x2_t, vuint32m2_t, uint32, 2, _u32m2x2)
@@ -427,31 +377,27 @@ DEF_RVV_TUPLE_TYPE (vuint32m2x3_t, 18, __rvv_uint32m2x3_t, vuint32m2_t, uint32,
DEF_RVV_TUPLE_TYPE (vint32m2x4_t, 17, __rvv_int32m2x4_t, vint32m2_t, int32, 4, _i32m2x4)
DEF_RVV_TUPLE_TYPE (vuint32m2x4_t, 18, __rvv_uint32m2x4_t, vuint32m2_t, uint32, 4, _u32m2x4)
/* LMUL = 4:
Machine mode = VNx16SImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx8SImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx4SImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, VNx16SI, VNx8SI, VNx4SI, _i32m4,
_i32, _e32m4)
DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, VNx16SI, VNx8SI, VNx4SI, _u32m4,
_u32, _e32m4)
Machine mode = RVVM4SImode. */
DEF_RVV_TYPE (vint32m4_t, 15, __rvv_int32m4_t, int32, RVVM4SI, _i32m4, _i32,
_e32m4)
DEF_RVV_TYPE (vuint32m4_t, 16, __rvv_uint32m4_t, uint32, RVVM4SI, _u32m4, _u32,
_e32m4)
/* Define tuple types for SEW = 32, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint32m4x2_t, 17, __rvv_int32m4x2_t, vint32m4_t, int32, 2, _i32m4x2)
DEF_RVV_TUPLE_TYPE (vuint32m4x2_t, 18, __rvv_uint32m4x2_t, vuint32m4_t, uint32, 2, _u32m4x2)
/* LMUL = 8:
Machine mode = VNx32SImode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx16SImode when TARGET_MIN_VLEN > 32.
Machine mode = VNx8SImode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, VNx32SI, VNx16SI, VNx8SI, _i32m8,
_i32, _e32m8)
DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, VNx32SI, VNx16SI, VNx8SI,
_u32m8, _u32, _e32m8)
Machine mode = RVVM8SImode. */
DEF_RVV_TYPE (vint32m8_t, 15, __rvv_int32m8_t, int32, RVVM8SI, _i32m8, _i32,
_e32m8)
DEF_RVV_TYPE (vuint32m8_t, 16, __rvv_uint32m8_t, uint32, RVVM8SI, _u32m8, _u32,
_e32m8)
/* SEW = 64:
Disable when !TARGET_VECTOR_ELEN_64. */
DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, int64, VNx2DI, VNx1DI, VOID, _i64m1,
_i64, _e64m1)
DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, VNx2DI, VNx1DI, VOID, _u64m1,
_u64, _e64m1)
DEF_RVV_TYPE (vint64m1_t, 15, __rvv_int64m1_t, int64, RVVM1DI, _i64m1, _i64,
_e64m1)
DEF_RVV_TYPE (vuint64m1_t, 16, __rvv_uint64m1_t, uint64, RVVM1DI, _u64m1, _u64,
_e64m1)
/* Define tuple types for SEW = 64, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vint64m1x2_t, 17, __rvv_int64m1x2_t, vint64m1_t, int64, 2, _i64m1x2)
DEF_RVV_TUPLE_TYPE (vuint64m1x2_t, 18, __rvv_uint64m1x2_t, vuint64m1_t, uint64, 2, _u64m1x2)
@@ -467,10 +413,10 @@ DEF_RVV_TUPLE_TYPE (vint64m1x7_t, 17, __rvv_int64m1x7_t, vint64m1_t, int64, 7, _
DEF_RVV_TUPLE_TYPE (vuint64m1x7_t, 18, __rvv_uint64m1x7_t, vuint64m1_t, uint64, 7, _u64m1x7)
DEF_RVV_TUPLE_TYPE (vint64m1x8_t, 17, __rvv_int64m1x8_t, vint64m1_t, int64, 8, _i64m1x8)
DEF_RVV_TUPLE_TYPE (vuint64m1x8_t, 18, __rvv_uint64m1x8_t, vuint64m1_t, uint64, 8, _u64m1x8)
DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, int64, VNx4DI, VNx2DI, VOID, _i64m2,
_i64, _e64m2)
DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, VNx4DI, VNx2DI, VOID, _u64m2,
_u64, _e64m2)
DEF_RVV_TYPE (vint64m2_t, 15, __rvv_int64m2_t, int64, RVVM2DI, _i64m2, _i64,
_e64m2)
DEF_RVV_TYPE (vuint64m2_t, 16, __rvv_uint64m2_t, uint64, RVVM2DI, _u64m2, _u64,
_e64m2)
/* Define tuple types for SEW = 64, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vint64m2x2_t, 17, __rvv_int64m2x2_t, vint64m2_t, int64, 2, _i64m2x2)
DEF_RVV_TUPLE_TYPE (vuint64m2x2_t, 18, __rvv_uint64m2x2_t, vuint64m2_t, uint64, 2, _u64m2x2)
@@ -478,22 +424,22 @@ DEF_RVV_TUPLE_TYPE (vint64m2x3_t, 17, __rvv_int64m2x3_t, vint64m2_t, int64, 3, _
DEF_RVV_TUPLE_TYPE (vuint64m2x3_t, 18, __rvv_uint64m2x3_t, vuint64m2_t, uint64, 3, _u64m2x3)
DEF_RVV_TUPLE_TYPE (vint64m2x4_t, 17, __rvv_int64m2x4_t, vint64m2_t, int64, 4, _i64m2x4)
DEF_RVV_TUPLE_TYPE (vuint64m2x4_t, 18, __rvv_uint64m2x4_t, vuint64m2_t, uint64, 4, _u64m2x4)
DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, int64, VNx8DI, VNx4DI, VOID, _i64m4,
_i64, _e64m4)
DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, VNx8DI, VNx4DI, VOID, _u64m4,
_u64, _e64m4)
DEF_RVV_TYPE (vint64m4_t, 15, __rvv_int64m4_t, int64, RVVM4DI, _i64m4, _i64,
_e64m4)
DEF_RVV_TYPE (vuint64m4_t, 16, __rvv_uint64m4_t, uint64, RVVM4DI, _u64m4, _u64,
_e64m4)
/* Define tuple types for SEW = 64, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vint64m4x2_t, 17, __rvv_int64m4x2_t, vint64m4_t, int64, 2, _i64m4x2)
DEF_RVV_TUPLE_TYPE (vuint64m4x2_t, 18, __rvv_uint64m4x2_t, vuint64m4_t, uint64, 2, _u64m4x2)
DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, int64, VNx16DI, VNx8DI, VOID, _i64m8,
_i64, _e64m8)
DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, VNx16DI, VNx8DI, VOID, _u64m8,
_u64, _e64m8)
DEF_RVV_TYPE (vint64m8_t, 15, __rvv_int64m8_t, int64, RVVM8DI, _i64m8, _i64,
_e64m8)
DEF_RVV_TYPE (vuint64m8_t, 16, __rvv_uint64m8_t, uint64, RVVM8DI, _u64m8, _u64,
_e64m8)
/* Enabled if TARGET_VECTOR_ELEN_FP_16 && (TARGET_ZVFH or TARGET_ZVFHMIN). */
/* LMUL = 1/4. */
DEF_RVV_TYPE (vfloat16mf4_t, 18, __rvv_float16mf4_t, float16, VNx2HF, VNx1HF, VOID,
_f16mf4, _f16, _e16mf4)
DEF_RVV_TYPE (vfloat16mf4_t, 18, __rvv_float16mf4_t, float16, RVVMF4HF, _f16mf4,
_f16, _e16mf4)
/* Define tuple types for SEW = 16, LMUL = MF4. */
DEF_RVV_TUPLE_TYPE (vfloat16mf4x2_t, 20, __rvv_float16mf4x2_t, vfloat16mf4_t, float, 2, _f16mf4x2)
DEF_RVV_TUPLE_TYPE (vfloat16mf4x3_t, 20, __rvv_float16mf4x3_t, vfloat16mf4_t, float, 3, _f16mf4x3)
@@ -503,8 +449,8 @@ DEF_RVV_TUPLE_TYPE (vfloat16mf4x6_t, 20, __rvv_float16mf4x6_t, vfloat16mf4_t, fl
DEF_RVV_TUPLE_TYPE (vfloat16mf4x7_t, 20, __rvv_float16mf4x7_t, vfloat16mf4_t, float, 7, _f16mf4x7)
DEF_RVV_TUPLE_TYPE (vfloat16mf4x8_t, 20, __rvv_float16mf4x8_t, vfloat16mf4_t, float, 8, _f16mf4x8)
/* LMUL = 1/2. */
DEF_RVV_TYPE (vfloat16mf2_t, 18, __rvv_float16mf2_t, float16, VNx4HF, VNx2HF, VNx1HF,
_f16mf2, _f16, _e16mf2)
DEF_RVV_TYPE (vfloat16mf2_t, 18, __rvv_float16mf2_t, float16, RVVMF2HF, _f16mf2,
_f16, _e16mf2)
/* Define tuple types for SEW = 16, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vfloat16mf2x2_t, 20, __rvv_float16mf2x2_t, vfloat16mf2_t, float, 2, _f16mf2x2)
DEF_RVV_TUPLE_TYPE (vfloat16mf2x3_t, 20, __rvv_float16mf2x3_t, vfloat16mf2_t, float, 3, _f16mf2x3)
@@ -514,8 +460,8 @@ DEF_RVV_TUPLE_TYPE (vfloat16mf2x6_t, 20, __rvv_float16mf2x6_t, vfloat16mf2_t, fl
DEF_RVV_TUPLE_TYPE (vfloat16mf2x7_t, 20, __rvv_float16mf2x7_t, vfloat16mf2_t, float, 7, _f16mf2x7)
DEF_RVV_TUPLE_TYPE (vfloat16mf2x8_t, 20, __rvv_float16mf2x8_t, vfloat16mf2_t, float, 8, _f16mf2x8)
/* LMUL = 1. */
DEF_RVV_TYPE (vfloat16m1_t, 17, __rvv_float16m1_t, float16, VNx8HF, VNx4HF, VNx2HF,
_f16m1, _f16, _e16m1)
DEF_RVV_TYPE (vfloat16m1_t, 17, __rvv_float16m1_t, float16, RVVM1HF, _f16m1,
_f16, _e16m1)
/* Define tuple types for SEW = 16, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vfloat16m1x2_t, 19, __rvv_float16m1x2_t, vfloat16m1_t, float, 2, _f16m1x2)
DEF_RVV_TUPLE_TYPE (vfloat16m1x3_t, 19, __rvv_float16m1x3_t, vfloat16m1_t, float, 3, _f16m1x3)
@@ -525,28 +471,27 @@ DEF_RVV_TUPLE_TYPE (vfloat16m1x6_t, 19, __rvv_float16m1x6_t, vfloat16m1_t, float
DEF_RVV_TUPLE_TYPE (vfloat16m1x7_t, 19, __rvv_float16m1x7_t, vfloat16m1_t, float, 7, _f16m1x7)
DEF_RVV_TUPLE_TYPE (vfloat16m1x8_t, 19, __rvv_float16m1x8_t, vfloat16m1_t, float, 8, _f16m1x8)
/* LMUL = 2. */
DEF_RVV_TYPE (vfloat16m2_t, 17, __rvv_float16m2_t, float16, VNx16HF, VNx8HF, VNx4HF,
_f16m2, _f16, _e16m2)
DEF_RVV_TYPE (vfloat16m2_t, 17, __rvv_float16m2_t, float16, RVVM2HF, _f16m2,
_f16, _e16m2)
/* Define tuple types for SEW = 16, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vfloat16m2x2_t, 19, __rvv_float16m2x2_t, vfloat16m2_t, float, 2, _f16m2x2)
DEF_RVV_TUPLE_TYPE (vfloat16m2x3_t, 19, __rvv_float16m2x3_t, vfloat16m2_t, float, 3, _f16m2x3)
DEF_RVV_TUPLE_TYPE (vfloat16m2x4_t, 19, __rvv_float16m2x4_t, vfloat16m2_t, float, 4, _f16m2x4)
/* LMUL = 4. */
DEF_RVV_TYPE (vfloat16m4_t, 17, __rvv_float16m4_t, float16, VNx32HF, VNx16HF, VNx8HF,
_f16m4, _f16, _e16m4)
DEF_RVV_TYPE (vfloat16m4_t, 17, __rvv_float16m4_t, float16, RVVM4HF, _f16m4,
_f16, _e16m4)
/* Define tuple types for SEW = 16, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vfloat16m4x2_t, 19, __rvv_float16m4x2_t, vfloat16m4_t, float, 2, _f16m4x2)
/* LMUL = 8. */
DEF_RVV_TYPE (vfloat16m8_t, 16, __rvv_float16m8_t, float16, VNx64HF, VNx32HF, VNx16HF,
_f16m8, _f16, _e16m8)
DEF_RVV_TYPE (vfloat16m8_t, 16, __rvv_float16m8_t, float16, RVVM8HF, _f16m8,
_f16, _e16m8)
/* Disable all when !TARGET_VECTOR_ELEN_FP_32. */
/* LMUL = 1/2:
Only enable when TARGET_MIN_VLEN > 32.
Machine mode = VNx1SFmode when TARGET_MIN_VLEN < 128.
Machine mode = VNx2SFmode when TARGET_MIN_VLEN >= 128. */
DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, VNx2SF, VNx1SF, VOID,
_f32mf2, _f32, _e32mf2)
Machine mode = RVVMF2SFmode. */
DEF_RVV_TYPE (vfloat32mf2_t, 18, __rvv_float32mf2_t, float, RVVMF2SF, _f32mf2,
_f32, _e32mf2)
/* Define tuple types for SEW = 32, LMUL = MF2. */
DEF_RVV_TUPLE_TYPE (vfloat32mf2x2_t, 20, __rvv_float32mf2x2_t, vfloat32mf2_t, float, 2, _f32mf2x2)
DEF_RVV_TUPLE_TYPE (vfloat32mf2x3_t, 20, __rvv_float32mf2x3_t, vfloat32mf2_t, float, 3, _f32mf2x3)
@@ -556,11 +501,9 @@ DEF_RVV_TUPLE_TYPE (vfloat32mf2x6_t, 20, __rvv_float32mf2x6_t, vfloat32mf2_t, fl
DEF_RVV_TUPLE_TYPE (vfloat32mf2x7_t, 20, __rvv_float32mf2x7_t, vfloat32mf2_t, float, 7, _f32mf2x7)
DEF_RVV_TUPLE_TYPE (vfloat32mf2x8_t, 20, __rvv_float32mf2x8_t, vfloat32mf2_t, float, 8, _f32mf2x8)
/* LMUL = 1:
Machine mode = VNx4SFmode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx2SFmode when TARGET_MIN_VLEN > 32.
Machine mode = VNx1SFmode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, VNx4SF, VNx2SF, VNx1SF,
_f32m1, _f32, _e32m1)
Machine mode = RVVM1SFmode. */
DEF_RVV_TYPE (vfloat32m1_t, 17, __rvv_float32m1_t, float, RVVM1SF, _f32m1, _f32,
_e32m1)
/* Define tuple types for SEW = 32, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vfloat32m1x2_t, 19, __rvv_float32m1x2_t, vfloat32m1_t, float, 2, _f32m1x2)
DEF_RVV_TUPLE_TYPE (vfloat32m1x3_t, 19, __rvv_float32m1x3_t, vfloat32m1_t, float, 3, _f32m1x3)
@@ -570,33 +513,27 @@ DEF_RVV_TUPLE_TYPE (vfloat32m1x6_t, 19, __rvv_float32m1x6_t, vfloat32m1_t, float
DEF_RVV_TUPLE_TYPE (vfloat32m1x7_t, 19, __rvv_float32m1x7_t, vfloat32m1_t, float, 7, _f32m1x7)
DEF_RVV_TUPLE_TYPE (vfloat32m1x8_t, 19, __rvv_float32m1x8_t, vfloat32m1_t, float, 8, _f32m1x8)
/* LMUL = 2:
Machine mode = VNx8SFmode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx4SFmode when TARGET_MIN_VLEN > 32.
Machine mode = VNx2SFmode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, VNx8SF, VNx4SF, VNx2SF,
_f32m2, _f32, _e32m2)
Machine mode = RVVM2SFmode. */
DEF_RVV_TYPE (vfloat32m2_t, 17, __rvv_float32m2_t, float, RVVM2SF, _f32m2, _f32,
_e32m2)
/* Define tuple types for SEW = 32, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vfloat32m2x2_t, 19, __rvv_float32m2x2_t, vfloat32m2_t, float, 2, _f32m2x2)
DEF_RVV_TUPLE_TYPE (vfloat32m2x3_t, 19, __rvv_float32m2x3_t, vfloat32m2_t, float, 3, _f32m2x3)
DEF_RVV_TUPLE_TYPE (vfloat32m2x4_t, 19, __rvv_float32m2x4_t, vfloat32m2_t, float, 4, _f32m2x4)
/* LMUL = 4:
Machine mode = VNx16SFmode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx8SFmode when TARGET_MIN_VLEN > 32.
Machine mode = VNx4SFmode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, VNx16SF, VNx8SF, VNx4SF,
_f32m4, _f32, _e32m4)
Machine mode = RVVM4SFmode. */
DEF_RVV_TYPE (vfloat32m4_t, 17, __rvv_float32m4_t, float, RVVM4SF, _f32m4, _f32,
_e32m4)
/* Define tuple types for SEW = 32, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vfloat32m4x2_t, 19, __rvv_float32m4x2_t, vfloat32m4_t, float, 2, _f32m4x2)
/* LMUL = 8:
Machine mode = VNx32SFmode when TARGET_MIN_VLEN >= 128.
Machine mode = VNx16SFmode when TARGET_MIN_VLEN > 32.
Machine mode = VNx8SFmode when TARGET_MIN_VLEN = 32. */
DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, VNx32SF, VNx16SF, VNx8SF,
_f32m8, _f32, _e32m8)
Machine mode = RVVM8SFmode. */
DEF_RVV_TYPE (vfloat32m8_t, 17, __rvv_float32m8_t, float, RVVM8SF, _f32m8, _f32,
_e32m8)
/* SEW = 64:
Disable when !TARGET_VECTOR_ELEN_FP_64. */
DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, VNx2DF, VNx1DF, VOID, _f64m1,
DEF_RVV_TYPE (vfloat64m1_t, 17, __rvv_float64m1_t, double, RVVM1DF, _f64m1,
_f64, _e64m1)
/* Define tuple types for SEW = 64, LMUL = M1. */
DEF_RVV_TUPLE_TYPE (vfloat64m1x2_t, 19, __rvv_float64m1x2_t, vfloat64m1_t, double, 2, _f64m1x2)
@@ -606,17 +543,17 @@ DEF_RVV_TUPLE_TYPE (vfloat64m1x5_t, 19, __rvv_float64m1x5_t, vfloat64m1_t, doubl
DEF_RVV_TUPLE_TYPE (vfloat64m1x6_t, 19, __rvv_float64m1x6_t, vfloat64m1_t, double, 6, _f64m1x6)
DEF_RVV_TUPLE_TYPE (vfloat64m1x7_t, 19, __rvv_float64m1x7_t, vfloat64m1_t, double, 7, _f64m1x7)
DEF_RVV_TUPLE_TYPE (vfloat64m1x8_t, 19, __rvv_float64m1x8_t, vfloat64m1_t, double, 8, _f64m1x8)
DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, VNx4DF, VNx2DF, VOID, _f64m2,
DEF_RVV_TYPE (vfloat64m2_t, 17, __rvv_float64m2_t, double, RVVM2DF, _f64m2,
_f64, _e64m2)
/* Define tuple types for SEW = 64, LMUL = M2. */
DEF_RVV_TUPLE_TYPE (vfloat64m2x2_t, 19, __rvv_float64m2x2_t, vfloat64m2_t, double, 2, _f64m2x2)
DEF_RVV_TUPLE_TYPE (vfloat64m2x3_t, 19, __rvv_float64m2x3_t, vfloat64m2_t, double, 3, _f64m2x3)
DEF_RVV_TUPLE_TYPE (vfloat64m2x4_t, 19, __rvv_float64m2x4_t, vfloat64m2_t, double, 4, _f64m2x4)
DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx8DF, VNx4DF, VOID, _f64m4,
DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, RVVM4DF, _f64m4,
_f64, _e64m4)
/* Define tuple types for SEW = 64, LMUL = M4. */
DEF_RVV_TUPLE_TYPE (vfloat64m4x2_t, 19, __rvv_float64m4x2_t, vfloat64m4_t, double, 2, _f64m4x2)
DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx16DF, VNx8DF, VOID, _f64m8,
DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, RVVM8DF, _f64m8,
_f64, _e64m8)
DEF_RVV_OP_TYPE (vv)


@@ -31,345 +31,260 @@ along with GCC; see the file COPYING3. If not see
Note: N/A means the corresponding vector type is disabled.
|Types |LMUL=1|LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
|int64_t |VNx1DI|VNx2DI |VNx4DI |VNx8DI |N/A |N/A |N/A |
|uint64_t|VNx1DI|VNx2DI |VNx4DI |VNx8DI |N/A |N/A |N/A |
|int32_t |VNx2SI|VNx4SI |VNx8SI |VNx16SI|VNx1SI |N/A |N/A |
|uint32_t|VNx2SI|VNx4SI |VNx8SI |VNx16SI|VNx1SI |N/A |N/A |
|int16_t |VNx4HI|VNx8HI |VNx16HI|VNx32HI|VNx2HI |VNx1HI |N/A |
|uint16_t|VNx4HI|VNx8HI |VNx16HI|VNx32HI|VNx2HI |VNx1HI |N/A |
|int8_t |VNx8QI|VNx16QI|VNx32QI|VNx64QI|VNx4QI |VNx2QI |VNx1QI |
|uint8_t |VNx8QI|VNx16QI|VNx32QI|VNx64QI|VNx4QI |VNx2QI |VNx1QI |
|float64 |VNx1DF|VNx2DF |VNx4DF |VNx8DF |N/A |N/A |N/A |
|float32 |VNx2SF|VNx4SF |VNx8SF |VNx16SF|VNx1SF |N/A |N/A |
|float16 |VNx4HF|VNx8HF |VNx16HF|VNx32HF|VNx2HF |VNx1HF |N/A |
Encode SEW and LMUL into data types.
We enforce the constraint LMUL >= SEW/ELEN in the implementation.
There are the following data types for ELEN = 64.
Mask Types Encode the ratio of SEW/LMUL into the
mask types. There are the following mask types.
|Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
|DI |RVVM1DI|RVVM2DI|RVVM4DI|RVVM8DI|N/A |N/A |N/A |
|SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|RVVMF2SI|N/A |N/A |
|HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|RVVMF4HI|N/A |
|QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|RVVMF8QI|
|DF |RVVM1DF|RVVM2DF|RVVM4DF|RVVM8DF|N/A |N/A |N/A |
|SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|RVVMF2SF|N/A |N/A |
|HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|RVVMF4HF|N/A |
n = SEW/LMUL
There are the following data types for ELEN = 32.
|Types|n=1 |n=2 |n=4 |n=8 |n=16 |n=32 |n=64 |
|bool |VNx64BI|VNx32BI|VNx16BI|VNx8BI|VNx4BI|VNx2BI|VNx1BI|
|Modes|LMUL=1 |LMUL=2 |LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
|SI |RVVM1SI|RVVM2SI|RVVM4SI|RVVM8SI|N/A |N/A |N/A |
|HI |RVVM1HI|RVVM2HI|RVVM4HI|RVVM8HI|RVVMF2HI|N/A |N/A |
|QI |RVVM1QI|RVVM2QI|RVVM4QI|RVVM8QI|RVVMF2QI|RVVMF4QI|N/A |
|SF |RVVM1SF|RVVM2SF|RVVM4SF|RVVM8SF|N/A |N/A |N/A |
|HF |RVVM1HF|RVVM2HF|RVVM4HF|RVVM8HF|RVVMF2HF|N/A |N/A |
There are the following data types for MIN_VLEN = 32.
Encode the ratio of SEW/LMUL into the mask types.
There are the following mask types.
|Types |LMUL=1|LMUL=2|LMUL=4 |LMUL=8 |LMUL=1/2|LMUL=1/4|LMUL=1/8|
|int64_t |N/A |N/A |N/A |N/A |N/A |N/A |N/A |
|uint64_t|N/A |N/A |N/A |N/A |N/A |N/A |N/A |
|int32_t |VNx1SI|VNx2SI|VNx4SI |VNx8SI |N/A |N/A |N/A |
|uint32_t|VNx1SI|VNx2SI|VNx4SI |VNx8SI |N/A |N/A |N/A |
|int16_t |VNx2HI|VNx4HI|VNx8HI |VNx16HI|VNx1HI |N/A |N/A |
|uint16_t|VNx2HI|VNx4HI|VNx8HI |VNx16HI|VNx1HI |N/A |N/A |
|int8_t |VNx4QI|VNx8QI|VNx16QI|VNx32QI|VNx2QI |VNx1QI |N/A |
|uint8_t |VNx4QI|VNx8QI|VNx16QI|VNx32QI|VNx2QI |VNx1QI |N/A |
|float64 |N/A |N/A |N/A |N/A |N/A |N/A |N/A |
|float32 |VNx1SF|VNx2SF|VNx4SF |VNx8SF |N/A |N/A |N/A |
|float16 |VNx2HF|VNx4HF|VNx8HF |VNx16HF|VNx1HF |N/A |N/A |
n = SEW/LMUL
Mask Types Encode the ratio of SEW/LMUL into the
mask types. There are the following mask types.
n = SEW/LMUL
|Types|n=1 |n=2 |n=4 |n=8 |n=16 |n=32 |n=64|
|bool |VNx32BI|VNx16BI|VNx8BI|VNx4BI|VNx2BI|VNx1BI|N/A |
TODO: FP16 vector needs support of 'zvfh', we don't support it yet. */
|Modes| n = 1 | n = 2 | n = 4 | n = 8 | n = 16 | n = 32 | n = 64 |
|BI |RVVM1BI|RVVMF2BI|RVVMF4BI|RVVMF8BI|RVVMF16BI|RVVMF32BI|RVVMF64BI| */
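To make the ratio rule above concrete, here is a small stand-alone sketch (illustration only, not part of the patch; the helper name mask_ratio is made up) that computes SEW/LMUL with LMUL expressed in eighths so that fractional LMUL stays integral, and checks a few pairings against the data-mode and mask-mode tables above:

/* Illustrative sketch: the SEW/LMUL ratio selects the mask mode.
   LMUL is passed in eighths: MF8 = 1, MF4 = 2, MF2 = 4, M1 = 8,
   M2 = 16, M4 = 32, M8 = 64.  */
#include <cassert>
#include <cstdio>

static int mask_ratio (int sew, int lmul_in_eighths)
{
  return sew * 8 / lmul_in_eighths;
}

int main ()
{
  /* RVVMF4HI: SEW = 16, LMUL = MF4 -> ratio 64 -> mask RVVMF64BI.  */
  assert (mask_ratio (16, 2) == 64);
  /* RVVM1SI: SEW = 32, LMUL = M1 -> ratio 32 -> mask RVVMF32BI.  */
  assert (mask_ratio (32, 8) == 32);
  /* RVVM8QI: SEW = 8, LMUL = M8 -> ratio 1 -> mask RVVM1BI.  */
  assert (mask_ratio (8, 64) == 1);
  puts ("ratios ok");
  return 0;
}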
/* Return 'REQUIREMENT' for machine_mode 'MODE'.
For example: 'MODE' = VNx64BImode needs TARGET_MIN_VLEN > 32. */
For example: 'MODE' = RVVMF64BImode needs TARGET_MIN_VLEN > 32. */
#ifndef ENTRY
#define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLEN32, \
VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64, \
VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128)
#define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO)
#endif
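The #ifndef/#define/#endif guard above is the usual X-macro arrangement: a consumer defines ENTRY before including the .def file and every ENTRY line expands into one table element, now with a single VLMUL/RATIO pair instead of one pair per TARGET_MIN_VLEN. A self-contained sketch of that pattern (hypothetical, simplified types; TARGET_MIN_VLEN is a stand-in constant here so the sketch compiles on its own):

#include <cstdio>

enum vlmul { LMUL_F8, LMUL_F4, LMUL_F2, LMUL_1, LMUL_2, LMUL_4, LMUL_8 };

struct mode_info
{
  const char *name;
  bool enabled;
  vlmul lmul;
  int ratio;   /* SEW/LMUL */
};

/* Stand-in for the real target flag, only so this sketch builds.  */
static const int TARGET_MIN_VLEN = 128;

#define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO) \
  { #MODE, (REQUIREMENT), VLMUL, RATIO },

static const mode_info table[] = {
  /* Two representative rows copied from the .def entries below.  */
  ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
  ENTRY (RVVM1QI, true, LMUL_1, 8)
};
#undef ENTRY

int main ()
{
  for (const mode_info &m : table)
    printf ("%s: enabled=%d ratio=%d\n", m.name, m.enabled, m.ratio);
}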
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
ENTRY (RVVMF32BI, true, LMUL_F4, 32)
ENTRY (RVVMF16BI, true, LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
ENTRY (RVVM1BI, true, LMUL_8, 1)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
ENTRY (RVVMF2QI, true, LMUL_F2, 16)
ENTRY (RVVMF4QI, true, LMUL_F4, 32)
ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
ENTRY (RVVMF2HI, true, LMUL_F2, 32)
ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16. */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32. */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64. */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
ENTRY (RVVM4DI, TARGET_VECTOR_ELEN_64, LMUL_4, 16)
ENTRY (RVVM2DI, TARGET_VECTOR_ELEN_64, LMUL_2, 32)
ENTRY (RVVM1DI, TARGET_VECTOR_ELEN_64, LMUL_1, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_FP_64. */
ENTRY (RVVM8DF, TARGET_VECTOR_ELEN_FP_64, LMUL_8, 8)
ENTRY (RVVM4DF, TARGET_VECTOR_ELEN_FP_64, LMUL_4, 16)
ENTRY (RVVM2DF, TARGET_VECTOR_ELEN_FP_64, LMUL_2, 32)
ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
/* Tuple modes for segment loads/stores according to NF.
Tuple modes format: RVV<LMUL>x<NF><BASEMODE>
When LMUL is MF8/MF4/MF2/M1, NF can be 2 ~ 8.
When LMUL is M2, NF can be 2 ~ 4.
When LMUL is M4, NF can be 2. */
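These NF limits follow from the RVV segment constraint that NF * LMUL must not exceed 8 vector registers. A tiny stand-alone sketch (max_nf is an illustrative helper, not patch code) that derives the limits used for the tuple entries below:

#include <algorithm>
#include <cassert>

/* Largest NF for a given LMUL under NF * LMUL <= 8; LMUL is again given
   in eighths, and the ISA caps NF itself at 8.  */
static int max_nf (int lmul_in_eighths)
{
  return std::min (8, 64 / lmul_in_eighths);
}

int main ()
{
  assert (max_nf (8) == 8);    /* M1  -> up to RVVM1x8<mode>  */
  assert (max_nf (16) == 4);   /* M2  -> up to RVVM2x4<mode>  */
  assert (max_nf (32) == 2);   /* M4  -> only RVVM4x2<mode>   */
  assert (max_nf (1) == 8);    /* MF8 -> up to RVVMF8x8<mode> */
  return 0;
}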
#ifndef TUPLE_ENTRY
#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64, \
RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128, \
RATIO_FOR_MIN_VLEN128)
#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL, RATIO)
#endif
/* Mask modes. Disable VNx64BImode when TARGET_MIN_VLEN == 32. */
ENTRY (VNx128BI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1)
ENTRY (VNx64BI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 1, LMUL_4, 2)
ENTRY (VNx32BI, true, LMUL_8, 1, LMUL_4, 2, LMUL_2, 4)
ENTRY (VNx16BI, true, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
ENTRY (VNx8BI, true, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
ENTRY (VNx4BI, true, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
ENTRY (VNx2BI, true, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
ENTRY (VNx1BI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
/* SEW = 8. Disable VNx64QImode when TARGET_MIN_VLEN == 32. */
ENTRY (VNx128QI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1)
ENTRY (VNx64QI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 1, LMUL_4, 2)
ENTRY (VNx32QI, true, LMUL_8, 1, LMUL_4, 2, LMUL_2, 4)
ENTRY (VNx16QI, true, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
ENTRY (VNx8QI, true, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
ENTRY (VNx4QI, true, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
ENTRY (VNx2QI, true, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
ENTRY (VNx1QI, TARGET_MIN_VLEN < 128, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
/* SEW = 16. Disable VNx32HImode when TARGET_MIN_VLEN == 32. */
ENTRY (VNx64HI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 2)
ENTRY (VNx32HI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 2, LMUL_4, 4)
ENTRY (VNx16HI, true, LMUL_8, 2, LMUL_4, 4, LMUL_2, 8)
ENTRY (VNx8HI, true, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
ENTRY (VNx4HI, true, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
ENTRY (VNx2HI, true, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
ENTRY (VNx1HI, TARGET_MIN_VLEN < 128, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
/* SEW = 16 for float point. Enabled when 'zvfh' or 'zvfhmin' is given. */
ENTRY (VNx64HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, \
LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 2)
ENTRY (VNx32HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, \
LMUL_RESERVED, 0, LMUL_8, 2, LMUL_4, 4)
ENTRY (VNx16HF, TARGET_VECTOR_ELEN_FP_16, \
LMUL_8, 2, LMUL_4, 4, LMUL_2, 8)
ENTRY (VNx8HF, TARGET_VECTOR_ELEN_FP_16, \
LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
ENTRY (VNx4HF, TARGET_VECTOR_ELEN_FP_16, \
LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
ENTRY (VNx2HF, TARGET_VECTOR_ELEN_FP_16, \
LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
ENTRY (VNx1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, \
LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
/* SEW = 32. Disable VNx16SImode when TARGET_MIN_VLEN == 32.
For single-precision floating-point, we need TARGET_VECTOR_ELEN_FP_32 to be
true. */
ENTRY (VNx32SI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 4)
ENTRY (VNx16SI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 4, LMUL_4, 8)
ENTRY (VNx8SI, true, LMUL_8, 4, LMUL_4, 8, LMUL_2, 16)
ENTRY (VNx4SI, true, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
ENTRY (VNx2SI, true, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
ENTRY (VNx1SI, TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
ENTRY (VNx32SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 4)
ENTRY (VNx16SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0,
LMUL_8, 4, LMUL_4, 8)
ENTRY (VNx8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4, LMUL_4, 8, LMUL_2, 16)
ENTRY (VNx4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
ENTRY (VNx2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
ENTRY (VNx1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x6DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 6, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x5DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 5, LMUL_1, 16)
TUPLE_ENTRY (RVVM2x4DI, TARGET_VECTOR_ELEN_64, RVVM2DI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 4, LMUL_1, 16)
TUPLE_ENTRY (RVVM2x3DI, TARGET_VECTOR_ELEN_64, RVVM2DI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 3, LMUL_1, 16)
TUPLE_ENTRY (RVVM4x2DI, TARGET_VECTOR_ELEN_64, RVVM4DI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2DI, TARGET_VECTOR_ELEN_64, RVVM2DI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 2, LMUL_1, 16)
/* SEW = 64. Enable when TARGET_VECTOR_ELEN_64 is true.
For double-precision floating-point, we need TARGET_VECTOR_ELEN_FP_64 to be
true. */
ENTRY (VNx16DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
ENTRY (VNx8DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_8, 8, LMUL_4, 16)
ENTRY (VNx4DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
ENTRY (VNx2DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
ENTRY (VNx1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (RVVM1x8DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 7, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x6DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 6, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x5DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 5, LMUL_1, 16)
TUPLE_ENTRY (RVVM2x4DF, TARGET_VECTOR_ELEN_FP_64, RVVM2DF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 4, LMUL_1, 16)
TUPLE_ENTRY (RVVM2x3DF, TARGET_VECTOR_ELEN_FP_64, RVVM2DF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 3, LMUL_1, 16)
TUPLE_ENTRY (RVVM4x2DF, TARGET_VECTOR_ELEN_FP_64, RVVM4DF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2DF, TARGET_VECTOR_ELEN_FP_64, RVVM2DF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2DF, TARGET_VECTOR_ELEN_FP_64, RVVM1DF, 2, LMUL_1, 16)
ENTRY (VNx16DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
ENTRY (VNx8DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0,
LMUL_8, 8, LMUL_4, 16)
ENTRY (VNx4DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
ENTRY (VNx2DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
ENTRY (VNx1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
/* Enable or disable the tuple type. BASE_MODE is the base vector mode of the
tuple mode. For example, the BASE_MODE of VNx2x1SImode is VNx1SImode. ALL
tuple modes should always satisfy NF * BASE_MODE LMUL <= 8. */
/* Tuple modes for EEW = 8. */
TUPLE_ENTRY (VNx2x64QI, TARGET_MIN_VLEN >= 128, VNx64QI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 2)
TUPLE_ENTRY (VNx2x32QI, TARGET_MIN_VLEN >= 64, VNx32QI, 2, LMUL_RESERVED, 0, LMUL_4, 2, LMUL_2, 4)
TUPLE_ENTRY (VNx3x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
TUPLE_ENTRY (VNx4x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
TUPLE_ENTRY (VNx2x16QI, true, VNx16QI, 2, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
TUPLE_ENTRY (VNx3x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 3, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
TUPLE_ENTRY (VNx4x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 4, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
TUPLE_ENTRY (VNx5x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
TUPLE_ENTRY (VNx6x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
TUPLE_ENTRY (VNx7x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
TUPLE_ENTRY (VNx8x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
TUPLE_ENTRY (VNx2x8QI, true, VNx8QI, 2, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
TUPLE_ENTRY (VNx3x8QI, true, VNx8QI, 3, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
TUPLE_ENTRY (VNx4x8QI, true, VNx8QI, 4, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
TUPLE_ENTRY (VNx5x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 5, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
TUPLE_ENTRY (VNx6x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 6, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
TUPLE_ENTRY (VNx7x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 7, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
TUPLE_ENTRY (VNx8x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 8, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
TUPLE_ENTRY (VNx2x4QI, true, VNx4QI, 2, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
TUPLE_ENTRY (VNx3x4QI, true, VNx4QI, 3, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
TUPLE_ENTRY (VNx4x4QI, true, VNx4QI, 4, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
TUPLE_ENTRY (VNx5x4QI, true, VNx4QI, 5, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
TUPLE_ENTRY (VNx6x4QI, true, VNx4QI, 6, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
TUPLE_ENTRY (VNx7x4QI, true, VNx4QI, 7, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
TUPLE_ENTRY (VNx8x4QI, true, VNx4QI, 8, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
TUPLE_ENTRY (VNx2x2QI, true, VNx2QI, 2, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
TUPLE_ENTRY (VNx3x2QI, true, VNx2QI, 3, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
TUPLE_ENTRY (VNx4x2QI, true, VNx2QI, 4, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
TUPLE_ENTRY (VNx5x2QI, true, VNx2QI, 5, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
TUPLE_ENTRY (VNx6x2QI, true, VNx2QI, 6, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
TUPLE_ENTRY (VNx7x2QI, true, VNx2QI, 7, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
TUPLE_ENTRY (VNx8x2QI, true, VNx2QI, 8, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
TUPLE_ENTRY (VNx2x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 2, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx3x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 3, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx4x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 4, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx5x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 5, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx6x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 6, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx7x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 7, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx8x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 8, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
/* Tuple modes for EEW = 16. */
TUPLE_ENTRY (VNx2x32HI, TARGET_MIN_VLEN >= 128, VNx32HI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 4)
TUPLE_ENTRY (VNx2x16HI, TARGET_MIN_VLEN >= 64, VNx16HI, 2, LMUL_RESERVED, 0, LMUL_4, 4, LMUL_2, 8)
TUPLE_ENTRY (VNx3x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
TUPLE_ENTRY (VNx4x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
TUPLE_ENTRY (VNx2x8HI, true, VNx8HI, 2, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
TUPLE_ENTRY (VNx3x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 3, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
TUPLE_ENTRY (VNx4x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 4, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
TUPLE_ENTRY (VNx5x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx6x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx7x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx8x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx2x4HI, true, VNx4HI, 2, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx3x4HI, true, VNx4HI, 3, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx4x4HI, true, VNx4HI, 4, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx5x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 5, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx6x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 6, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx7x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 7, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx8x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 8, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx2x2HI, true, VNx2HI, 2, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx3x2HI, true, VNx2HI, 3, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx4x2HI, true, VNx2HI, 4, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx5x2HI, true, VNx2HI, 5, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx6x2HI, true, VNx2HI, 6, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx7x2HI, true, VNx2HI, 7, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx8x2HI, true, VNx2HI, 8, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx2x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 2, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx3x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 3, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx4x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 4, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx5x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 5, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx6x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 6, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx7x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 7, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx8x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 8, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx2x32HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx32HF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 4)
TUPLE_ENTRY (VNx2x16HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx16HF, 2, LMUL_RESERVED, 0, LMUL_4, 4, LMUL_2, 8)
TUPLE_ENTRY (VNx3x16HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx16HF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
TUPLE_ENTRY (VNx4x16HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx16HF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
TUPLE_ENTRY (VNx2x8HF, TARGET_VECTOR_ELEN_FP_16, VNx8HF, 2, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
TUPLE_ENTRY (VNx3x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx8HF, 3, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
TUPLE_ENTRY (VNx4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx8HF, 4, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
TUPLE_ENTRY (VNx5x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx6x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx7x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx8x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 128, VNx8HF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
TUPLE_ENTRY (VNx2x4HF, TARGET_VECTOR_ELEN_FP_16, VNx4HF, 2, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx3x4HF, TARGET_VECTOR_ELEN_FP_16, VNx4HF, 3, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx4x4HF, TARGET_VECTOR_ELEN_FP_16, VNx4HF, 4, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx5x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 5, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx6x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 6, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx7x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 7, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx8x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN >= 64, VNx4HF, 8, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
TUPLE_ENTRY (VNx2x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 2, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx3x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 3, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx4x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 4, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx5x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 5, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx6x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 6, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx7x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 7, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx8x2HF, TARGET_VECTOR_ELEN_FP_16, VNx2HF, 8, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
TUPLE_ENTRY (VNx2x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 2, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx3x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 3, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx4x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 4, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx5x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 5, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx6x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 6, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx7x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 7, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx8x1HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN < 128, VNx1HF, 8, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
/* Tuple modes for EEW = 32. */
TUPLE_ENTRY (VNx2x16SI, TARGET_MIN_VLEN >= 128, VNx16SI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
TUPLE_ENTRY (VNx2x8SI, TARGET_MIN_VLEN >= 64, VNx8SI, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
TUPLE_ENTRY (VNx3x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
TUPLE_ENTRY (VNx4x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
TUPLE_ENTRY (VNx2x4SI, true, VNx4SI, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
TUPLE_ENTRY (VNx3x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
TUPLE_ENTRY (VNx4x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
TUPLE_ENTRY (VNx5x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx6x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx7x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx8x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx2x2SI, true, VNx2SI, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx3x2SI, true, VNx2SI, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx4x2SI, true, VNx2SI, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx5x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx6x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx7x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx8x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx2x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx3x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx4x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx5x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx6x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx7x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx8x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx2x16SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, VNx16SF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
TUPLE_ENTRY (VNx2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx8SF, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
TUPLE_ENTRY (VNx3x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
TUPLE_ENTRY (VNx4x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
TUPLE_ENTRY (VNx2x4SF, TARGET_VECTOR_ELEN_FP_32, VNx4SF, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
TUPLE_ENTRY (VNx3x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
TUPLE_ENTRY (VNx4x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
TUPLE_ENTRY (VNx5x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx6x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx7x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx8x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
TUPLE_ENTRY (VNx2x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx3x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx4x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx5x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx6x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx7x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx8x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
TUPLE_ENTRY (VNx2x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx3x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx4x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx5x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx6x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx7x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx8x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
/* Tuple modes for EEW = 64. */
TUPLE_ENTRY (VNx2x8DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx8DI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
TUPLE_ENTRY (VNx2x4DI, TARGET_VECTOR_ELEN_64, VNx4DI, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
TUPLE_ENTRY (VNx3x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
TUPLE_ENTRY (VNx4x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
TUPLE_ENTRY (VNx2x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
TUPLE_ENTRY (VNx3x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
TUPLE_ENTRY (VNx4x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
TUPLE_ENTRY (VNx5x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx6x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx7x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx8x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx2x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx3x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx4x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx5x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx6x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx7x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx8x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx2x8DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx8DF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
TUPLE_ENTRY (VNx2x4DF, TARGET_VECTOR_ELEN_FP_64, VNx4DF, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
TUPLE_ENTRY (VNx3x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
TUPLE_ENTRY (VNx4x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
TUPLE_ENTRY (VNx2x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
TUPLE_ENTRY (VNx3x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
TUPLE_ENTRY (VNx4x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
TUPLE_ENTRY (VNx5x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx6x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx7x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx8x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
TUPLE_ENTRY (VNx2x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx3x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx4x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx5x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx6x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx7x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
TUPLE_ENTRY (VNx8x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
#undef ENTRY
#undef TUPLE_ENTRY
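To make the table above easier to read: my reading of the field layout is TUPLE_ENTRY (MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL/RATIO for MIN_VLEN=32, VLMUL/RATIO for MIN_VLEN=64, VLMUL/RATIO for MIN_VLEN=128), where each RATIO is SEW/LMUL. A small standalone sketch (not part of the patch; the parameters are read off the table above) that reproduces the three RATIO columns for VNx2x4SI:

#include <stdio.h>

/* Sketch: reproduce the SEW/LMUL ratio columns of
   TUPLE_ENTRY (VNx2x4SI, true, VNx4SI, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32).
   SEW is in bits; LMUL is passed in eighths so fractional LMULs also work
   (LMUL_4 = 32, LMUL_1 = 8, LMUL_F2 = 4).  */
static int
sew_lmul_ratio (int sew, int lmul_in_eighths)
{
  return sew * 8 / lmul_in_eighths;
}

int
main (void)
{
  printf ("MIN_VLEN=32:  ratio %d\n", sew_lmul_ratio (32, 32)); /* LMUL_4 -> 8  */
  printf ("MIN_VLEN=64:  ratio %d\n", sew_lmul_ratio (32, 16)); /* LMUL_2 -> 16 */
  printf ("MIN_VLEN=128: ratio %d\n", sew_lmul_ratio (32, 8));  /* LMUL_1 -> 32 */
  return 0;
}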

@@ -892,9 +892,9 @@ change_insn (function_info *ssa, insn_change change, insn_info *insn,
return false;
/* Fix bug:
(insn 12 34 13 2 (set (reg:VNx8DI 120 v24 [orig:134 _1 ] [134])
(if_then_else:VNx8DI (unspec:VNx8BI [
(const_vector:VNx8BI repeat [
(insn 12 34 13 2 (set (reg:RVVM4DI 120 v24 [orig:134 _1 ] [134])
(if_then_else:RVVM4DI (unspec:RVVMF8BI [
(const_vector:RVVMF8BI repeat [
(const_int 1 [0x1])
])
(const_int 0 [0])
@@ -903,13 +903,13 @@ change_insn (function_info *ssa, insn_change change, insn_info *insn,
(reg:SI 66 vl)
(reg:SI 67 vtype)
] UNSPEC_VPREDICATE)
(plus:VNx8DI (reg/v:VNx8DI 104 v8 [orig:137 op1 ] [137])
(sign_extend:VNx8DI (vec_duplicate:VNx8SI (reg:SI 15 a5
[140])))) (unspec:VNx8DI [ (const_int 0 [0]) ] UNSPEC_VUNDEF))) "rvv.c":8:12
(plus:RVVM4DI (reg/v:RVVM4DI 104 v8 [orig:137 op1 ] [137])
(sign_extend:RVVM4DI (vec_duplicate:RVVM4SI (reg:SI 15 a5
[140])))) (unspec:RVVM4DI [ (const_int 0 [0]) ] UNSPEC_VUNDEF))) "rvv.c":8:12
2784 {pred_single_widen_addsvnx8di_scalar} (expr_list:REG_EQUIV
(mem/c:VNx8DI (reg:DI 10 a0 [142]) [1 <retval>+0 S[64, 64] A128])
(expr_list:REG_EQUAL (if_then_else:VNx8DI (unspec:VNx8BI [
(const_vector:VNx8BI repeat [
(mem/c:RVVM4DI (reg:DI 10 a0 [142]) [1 <retval>+0 S[64, 64] A128])
(expr_list:REG_EQUAL (if_then_else:RVVM4DI (unspec:RVVMF8BI [
(const_vector:RVVMF8BI repeat [
(const_int 1 [0x1])
])
(reg/v:DI 13 a3 [orig:139 vl ] [139])
@@ -918,11 +918,11 @@ change_insn (function_info *ssa, insn_change change, insn_info *insn,
(reg:SI 66 vl)
(reg:SI 67 vtype)
] UNSPEC_VPREDICATE)
(plus:VNx8DI (reg/v:VNx8DI 104 v8 [orig:137 op1 ] [137])
(const_vector:VNx8DI repeat [
(plus:RVVM4DI (reg/v:RVVM4DI 104 v8 [orig:137 op1 ] [137])
(const_vector:RVVM4DI repeat [
(const_int 2730 [0xaaa])
]))
(unspec:VNx8DI [
(unspec:RVVM4DI [
(const_int 0 [0])
] UNSPEC_VUNDEF))
(nil))))

@@ -972,8 +972,8 @@ riscv_valid_lo_sum_p (enum riscv_symbol_type sym_type, machine_mode mode,
}
/* Return true if mode is the RVV enabled mode.
For example: 'VNx1DI' mode is disabled if MIN_VLEN == 32.
'VNx1SI' mode is enabled if MIN_VLEN == 32. */
For example: 'RVVMF2SI' mode is disabled,
whereas 'RVVM1SI' mode is enabled if MIN_VLEN == 32. */
bool
riscv_v_ext_vector_mode_p (machine_mode mode)
@@ -1023,11 +1023,36 @@ riscv_v_ext_mode_p (machine_mode mode)
poly_int64
riscv_v_adjust_nunits (machine_mode mode, int scale)
{
gcc_assert (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL);
if (riscv_v_ext_mode_p (mode))
return riscv_vector_chunks * scale;
{
if (TARGET_MIN_VLEN == 32)
scale = scale / 2;
return riscv_vector_chunks * scale;
}
return scale;
}
/* Call from ADJUST_NUNITS in riscv-modes.def. Return the correct
NUNITS size for corresponding machine_mode. */
poly_int64
riscv_v_adjust_nunits (machine_mode mode, bool fractional_p, int lmul, int nf)
{
if (riscv_v_ext_mode_p (mode))
{
scalar_mode smode = GET_MODE_INNER (mode);
int size = GET_MODE_SIZE (smode);
int nunits_per_chunk = riscv_bytes_per_vector_chunk / size;
if (fractional_p)
return nunits_per_chunk / lmul * riscv_vector_chunks * nf;
else
return nunits_per_chunk * lmul * riscv_vector_chunks * nf;
}
/* Set the size of disabled RVV modes to 1 by default. */
return 1;
}
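To illustrate the NUNITS formula above: a small standalone sketch (not part of the patch) assuming riscv_bytes_per_vector_chunk == 8, i.e. a MIN_VLEN >= 64 configuration. It only computes the coefficient that the function multiplies by riscv_vector_chunks, and the LMUL/NF parameters per mode name are my reading of the new naming scheme:

#include <stdio.h>

/* Mirrors the nunits computation in riscv_v_adjust_nunits, assuming
   riscv_bytes_per_vector_chunk == 8 (MIN_VLEN >= 64).  Returns the
   coefficient that gets multiplied by riscv_vector_chunks.  */
static int
nunits_coeff (int elem_bytes, int fractional_p, int lmul, int nf)
{
  int nunits_per_chunk = 8 / elem_bytes;
  return fractional_p ? nunits_per_chunk / lmul * nf
                      : nunits_per_chunk * lmul * nf;
}

int
main (void)
{
  printf ("RVVM1SI:   %d\n", nunits_coeff (4, 0, 1, 1)); /* [2,2] units   */
  printf ("RVVMF2SI:  %d\n", nunits_coeff (4, 1, 2, 1)); /* [1,1] units   */
  printf ("RVVM4DI:   %d\n", nunits_coeff (8, 0, 4, 1)); /* [4,4] units   */
  printf ("RVVM1x8QI: %d\n", nunits_coeff (1, 0, 1, 8)); /* [64,64] units */
  return 0;
}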
/* Call from ADJUST_BYTESIZE in riscv-modes.def. Return the correct
BYTE size for corresponding machine_mode. */
@@ -1035,17 +1060,20 @@ poly_int64
riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
if (riscv_v_ext_vector_mode_p (mode))
{
poly_uint16 mode_size = GET_MODE_SIZE (mode);
{
poly_int64 nunits = GET_MODE_NUNITS (mode);
poly_int64 mode_size = GET_MODE_SIZE (mode);
if (maybe_eq (mode_size, (uint16_t)-1))
mode_size = riscv_vector_chunks * scale;
if (maybe_eq (mode_size, (uint16_t) -1))
mode_size = riscv_vector_chunks * scale;
if (known_gt (mode_size, BYTES_PER_RISCV_VECTOR))
mode_size = BYTES_PER_RISCV_VECTOR;
return mode_size;
}
if (nunits.coeffs[0] > 8)
return exact_div (nunits, 8);
else if (nunits.is_constant ())
return 1;
else
return poly_int64 (1, 1);
}
return scale;
}
@@ -1056,10 +1084,7 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
poly_int64
riscv_v_adjust_precision (machine_mode mode, int scale)
{
if (riscv_v_ext_vector_mode_p (mode))
return riscv_vector_chunks * scale;
return scale;
return riscv_v_adjust_nunits (mode, scale);
}
/* Return true if X is a valid address for machine mode MODE. If it is,
@@ -6482,25 +6507,8 @@ riscv_init_machine_status (void)
static poly_uint16
riscv_convert_vector_bits (void)
{
int chunk_num = 1;
if (TARGET_MIN_VLEN >= 128)
{
/* We have Full 'V' extension for application processors. It's specified
by -march=rv64gcv/rv32gcv, The 'V' extension depends upon the Zvl128b
and Zve64d extensions. Thus the number of bytes in a vector is 16 + 16
* x1 which is riscv_vector_chunks * 16 = poly_int (16, 16). */
riscv_bytes_per_vector_chunk = 16;
/* Adjust BYTES_PER_RISCV_VECTOR according to TARGET_MIN_VLEN:
- TARGET_MIN_VLEN = 128bit: [16,16]
- TARGET_MIN_VLEN = 256bit: [32,32]
- TARGET_MIN_VLEN = 512bit: [64,64]
- TARGET_MIN_VLEN = 1024bit: [128,128]
- TARGET_MIN_VLEN = 2048bit: [256,256]
- TARGET_MIN_VLEN = 4096bit: [512,512]
FIXME: We currently DON'T support TARGET_MIN_VLEN > 4096bit. */
chunk_num = TARGET_MIN_VLEN / 128;
}
else if (TARGET_MIN_VLEN > 32)
int chunk_num;
if (TARGET_MIN_VLEN > 32)
{
/* When targetting minimum VLEN > 32, we should use 64-bit chunk size.
Otherwise we can not include SEW = 64bits.
@@ -6509,6 +6517,16 @@ riscv_convert_vector_bits (void)
Thus the number of bytes in a vector is 8 + 8 * x1 which is
riscv_vector_chunks * 8 = poly_int (8, 8). */
riscv_bytes_per_vector_chunk = 8;
/* Adjust BYTES_PER_RISCV_VECTOR according to TARGET_MIN_VLEN:
- TARGET_MIN_VLEN = 64bit: [8,8]
- TARGET_MIN_VLEN = 128bit: [16,16]
- TARGET_MIN_VLEN = 256bit: [32,32]
- TARGET_MIN_VLEN = 512bit: [64,64]
- TARGET_MIN_VLEN = 1024bit: [128,128]
- TARGET_MIN_VLEN = 2048bit: [256,256]
- TARGET_MIN_VLEN = 4096bit: [512,512]
FIXME: We currently DON'T support TARGET_MIN_VLEN > 4096bit. */
chunk_num = TARGET_MIN_VLEN / 64;
}
else
{
@@ -6518,6 +6536,7 @@ riscv_convert_vector_bits (void)
Thus the number of bytes in a vector is 4 + 4 * x1 which is
riscv_vector_chunks * 4 = poly_int (4, 4). */
riscv_bytes_per_vector_chunk = 4;
chunk_num = 1;
}
/* Set riscv_vector_chunks as poly (1, 1) run-time constant if TARGET_VECTOR
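A small standalone sketch (not part of the patch) that recomputes the chunk bookkeeping above for a few TARGET_MIN_VLEN values; the [a,b] output is meant to match the BYTES_PER_RISCV_VECTOR table in the comment:

#include <stdio.h>

/* Recompute riscv_bytes_per_vector_chunk and the BYTES_PER_RISCV_VECTOR
   coefficients for a few TARGET_MIN_VLEN values, following the logic above.  */
int
main (void)
{
  int min_vlens[] = { 32, 64, 128, 256 };
  for (int i = 0; i < 4; i++)
    {
      int min_vlen = min_vlens[i];
      int bytes_per_chunk = min_vlen > 32 ? 8 : 4;
      int chunk_num = min_vlen > 32 ? min_vlen / 64 : 1;
      /* BYTES_PER_RISCV_VECTOR = [bytes_per_chunk * chunk_num, same].  */
      printf ("MIN_VLEN=%4d: chunk=%d bytes, chunk_num=%d, bytes/vector=[%d,%d]\n",
              min_vlen, bytes_per_chunk, chunk_num,
              bytes_per_chunk * chunk_num, bytes_per_chunk * chunk_num);
    }
  return 0;
}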

@@ -1040,6 +1040,7 @@ extern unsigned riscv_stack_boundary;
extern unsigned riscv_bytes_per_vector_chunk;
extern poly_uint16 riscv_vector_chunks;
extern poly_int64 riscv_v_adjust_nunits (enum machine_mode, int);
extern poly_int64 riscv_v_adjust_nunits (machine_mode, bool, int, int);
extern poly_int64 riscv_v_adjust_precision (enum machine_mode, int);
extern poly_int64 riscv_v_adjust_bytesize (enum machine_mode, int);
/* The number of bits and bytes in a RVV vector. */

@@ -172,44 +172,51 @@
;; Main data type used by the insn
(define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,HF,SF,DF,TF,
VNx1BI,VNx2BI,VNx4BI,VNx8BI,VNx16BI,VNx32BI,VNx64BI,VNx128BI,
VNx1QI,VNx2QI,VNx4QI,VNx8QI,VNx16QI,VNx32QI,VNx64QI,VNx128QI,
VNx1HI,VNx2HI,VNx4HI,VNx8HI,VNx16HI,VNx32HI,VNx64HI,
VNx1SI,VNx2SI,VNx4SI,VNx8SI,VNx16SI,VNx32SI,
VNx1DI,VNx2DI,VNx4DI,VNx8DI,VNx16DI,
VNx1HF,VNx2HF,VNx4HF,VNx8HF,VNx16HF,VNx32HF,VNx64HF,
VNx1SF,VNx2SF,VNx4SF,VNx8SF,VNx16SF,VNx32SF,
VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF,
VNx2x64QI,VNx2x32QI,VNx3x32QI,VNx4x32QI,
VNx2x16QI,VNx3x16QI,VNx4x16QI,VNx5x16QI,VNx6x16QI,VNx7x16QI,VNx8x16QI,
VNx2x8QI,VNx3x8QI,VNx4x8QI,VNx5x8QI,VNx6x8QI,VNx7x8QI,VNx8x8QI,
VNx2x4QI,VNx3x4QI,VNx4x4QI,VNx5x4QI,VNx6x4QI,VNx7x4QI,VNx8x4QI,
VNx2x2QI,VNx3x2QI,VNx4x2QI,VNx5x2QI,VNx6x2QI,VNx7x2QI,VNx8x2QI,
VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI,
VNx2x32HI,VNx2x16HI,VNx3x16HI,VNx4x16HI,
VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI,
VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI,
VNx2x2HI,VNx3x2HI,VNx4x2HI,VNx5x2HI,VNx6x2HI,VNx7x2HI,VNx8x2HI,
VNx2x1HI,VNx3x1HI,VNx4x1HI,VNx5x1HI,VNx6x1HI,VNx7x1HI,VNx8x1HI,
VNx2x32HF,VNx2x16HF,VNx3x16HF,VNx4x16HF,
VNx2x8HF,VNx3x8HF,VNx4x8HF,VNx5x8HF,VNx6x8HF,VNx7x8HF,VNx8x8HF,
VNx2x4HF,VNx3x4HF,VNx4x4HF,VNx5x4HF,VNx6x4HF,VNx7x4HF,VNx8x4HF,
VNx2x2HF,VNx3x2HF,VNx4x2HF,VNx5x2HF,VNx6x2HF,VNx7x2HF,VNx8x2HF,
VNx2x1HF,VNx3x1HF,VNx4x1HF,VNx5x1HF,VNx6x1HF,VNx7x1HF,VNx8x1HF,
VNx2x16SI,VNx2x8SI,VNx3x8SI,VNx4x8SI,
VNx2x4SI,VNx3x4SI,VNx4x4SI,VNx5x4SI,VNx6x4SI,VNx7x4SI,VNx8x4SI,
VNx2x2SI,VNx3x2SI,VNx4x2SI,VNx5x2SI,VNx6x2SI,VNx7x2SI,VNx8x2SI,
VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,
VNx2x16SF,VNx2x8SF,VNx3x8SF,VNx4x8SF,
VNx2x4SF,VNx3x4SF,VNx4x4SF,VNx5x4SF,VNx6x4SF,VNx7x4SF,VNx8x4SF,
VNx2x2SF,VNx3x2SF,VNx4x2SF,VNx5x2SF,VNx6x2SF,VNx7x2SF,VNx8x2SF,
VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF,
VNx2x8DI,VNx2x4DI,VNx3x4DI,VNx4x4DI,
VNx2x2DI,VNx3x2DI,VNx4x2DI,VNx5x2DI,VNx6x2DI,VNx7x2DI,VNx8x2DI,
VNx2x1DI,VNx3x1DI,VNx4x1DI,VNx5x1DI,VNx6x1DI,VNx7x1DI,VNx8x1DI,
VNx2x8DF,VNx2x4DF,VNx3x4DF,VNx4x4DF,
VNx2x2DF,VNx3x2DF,VNx4x2DF,VNx5x2DF,VNx6x2DF,VNx7x2DF,VNx8x2DF,
VNx2x1DF,VNx3x1DF,VNx4x1DF,VNx5x1DF,VNx6x1DF,VNx7x1DF,VNx8x1DF"
RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,
RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,
RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,
RVVM8HF,RVVM4HF,RVVM2HF,RVVM1HF,RVVMF2HF,RVVMF4HF,
RVVM8SI,RVVM4SI,RVVM2SI,RVVM1SI,RVVMF2SI,
RVVM8SF,RVVM4SF,RVVM2SF,RVVM1SF,RVVMF2SF,
RVVM8DI,RVVM4DI,RVVM2DI,RVVM1DI,
RVVM8DF,RVVM4DF,RVVM2DF,RVVM1DF,
RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,
RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,
RVVM1x6QI,RVVMF2x6QI,RVVMF4x6QI,RVVMF8x6QI,
RVVM1x5QI,RVVMF2x5QI,RVVMF4x5QI,RVVMF8x5QI,
RVVM2x4QI,RVVM1x4QI,RVVMF2x4QI,RVVMF4x4QI,RVVMF8x4QI,
RVVM2x3QI,RVVM1x3QI,RVVMF2x3QI,RVVMF4x3QI,RVVMF8x3QI,
RVVM4x2QI,RVVM2x2QI,RVVM1x2QI,RVVMF2x2QI,RVVMF4x2QI,RVVMF8x2QI,
RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,
RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,
RVVM1x6HI,RVVMF2x6HI,RVVMF4x6HI,
RVVM1x5HI,RVVMF2x5HI,RVVMF4x5HI,
RVVM2x4HI,RVVM1x4HI,RVVMF2x4HI,RVVMF4x4HI,
RVVM2x3HI,RVVM1x3HI,RVVMF2x3HI,RVVMF4x3HI,
RVVM4x2HI,RVVM2x2HI,RVVM1x2HI,RVVMF2x2HI,RVVMF4x2HI,
RVVM1x8HF,RVVMF2x8HF,RVVMF4x8HF,RVVM1x7HF,RVVMF2x7HF,
RVVMF4x7HF,RVVM1x6HF,RVVMF2x6HF,RVVMF4x6HF,RVVM1x5HF,
RVVMF2x5HF,RVVMF4x5HF,RVVM2x4HF,RVVM1x4HF,RVVMF2x4HF,
RVVMF4x4HF,RVVM2x3HF,RVVM1x3HF,RVVMF2x3HF,RVVMF4x3HF,
RVVM4x2HF,RVVM2x2HF,RVVM1x2HF,RVVMF2x2HF,RVVMF4x2HF,
RVVM1x8SI,RVVMF2x8SI,
RVVM1x7SI,RVVMF2x7SI,
RVVM1x6SI,RVVMF2x6SI,
RVVM1x5SI,RVVMF2x5SI,
RVVM2x4SI,RVVM1x4SI,RVVMF2x4SI,
RVVM2x3SI,RVVM1x3SI,RVVMF2x3SI,
RVVM4x2SI,RVVM2x2SI,RVVM1x2SI,RVVMF2x2SI,
RVVM1x8SF,RVVMF2x8SF,RVVM1x7SF,RVVMF2x7SF,
RVVM1x6SF,RVVMF2x6SF,RVVM1x5SF,RVVMF2x5SF,
RVVM2x4SF,RVVM1x4SF,RVVMF2x4SF,RVVM2x3SF,
RVVM1x3SF,RVVMF2x3SF,RVVM4x2SF,RVVM2x2SF,
RVVM1x2SF,RVVMF2x2SF,
RVVM1x8DI,RVVM1x7DI,RVVM1x6DI,RVVM1x5DI,
RVVM2x4DI,RVVM1x4DI,RVVM2x3DI,RVVM1x3DI,
RVVM4x2DI,RVVM2x2DI,RVVM1x2DI,RVVM1x8DF,
RVVM1x7DF,RVVM1x6DF,RVVM1x5DF,RVVM2x4DF,
RVVM1x4DF,RVVM2x3DF,RVVM1x3DF,RVVM4x2DF,
RVVM2x2DF,RVVM1x2DF"
(const_string "unknown"))
;; True if the main data type is twice the size of a word.
@@ -447,13 +454,13 @@
vfncvtitof,vfwcvtftoi,vfcvtftoi,vfcvtitof,
vfredo,vfredu,vfwredo,vfwredu,
vfslide1up,vfslide1down")
(and (eq_attr "mode" "VNx1HF,VNx2HF,VNx4HF,VNx8HF,VNx16HF,VNx32HF,VNx64HF")
(and (eq_attr "mode" "RVVM8HF,RVVM4HF,RVVM2HF,RVVM1HF,RVVMF2HF,RVVMF4HF")
(match_test "!TARGET_ZVFH")))
(const_string "yes")
;; The mode records as QI for the FP16 <=> INT8 instruction.
(and (eq_attr "type" "vfncvtftoi,vfwcvtitof")
(and (eq_attr "mode" "VNx1QI,VNx2QI,VNx4QI,VNx8QI,VNx16QI,VNx32QI,VNx64QI")
(and (eq_attr "mode" "RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI")
(match_test "!TARGET_ZVFH")))
(const_string "yes")
]

File diff suppressed because it is too large.

File diff suppressed because it is too large.

@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
/* For some reason we exceed
the default code model's +-2 GiB limits. We should investigate why and
add a proper description here. For now just make sure the test case
compiles properly. */
/* { dg-additional-options "-mcmodel=medany" } */
#include "gather_load-7.c"
#include <assert.h>

@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
/* For some reason we exceed
the default code model's +-2 GiB limits. We should investigate why and
add a proper description here. For now just make sure the test case
compiles properly. */
/* { dg-additional-options "-mcmodel=medany" } */
#include "gather_load-8.c"
#include <assert.h>

@@ -1,5 +1,10 @@
/* { dg-do compile } */
/* { dg-additional-options "-march=rv32gcv_zvfh -mabi=ilp32d -fdump-tree-vect-details" } */
/* For some reason we exceed
the default code model's +-2 GiB limits. We should investigate why and
add a proper description here. For now just make sure the test case
compiles properly. */
/* { dg-additional-options "-mcmodel=medany" } */
#include <stdint-gcc.h>

@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
/* For some reason we exceed
the default code model's +-2 GiB limits. We should investigate why and
add a proper description here. For now just make sure the test case
compiles properly. */
/* { dg-additional-options "-mcmodel=medany" } */
#include "mask_scatter_store-8.c"
#include <assert.h>

@@ -1,5 +1,9 @@
/* { dg-do run { target { riscv_vector } } } */
/* For some reason we exceed
the default code model's +-2 GiB limits. We should investigate why and
add a proper description here. For now just make sure the test case
compiles properly. */
/* { dg-additional-options "-mcmodel=medany" } */
#include "scatter_store-8.c"
#include <assert.h>