aarch64: Add mf8 data movement intrinsics

This patch adds mf8 variants of what I'll loosely call the existing
"data movement" intrinsics, including the recent FEAT_LUT ones.
I think this completes the FP8 intrinsic definitions.

The new intrinsics are defined entirely in the compiler.  This should
make it easy to move the existing non-mf8 variants into the compiler
as well, but that's too invasive for stage 3 and so is left to GCC 16.

I wondered about trying to reduce the cut-&-paste in the .def file,
but in the end decided against it.  I have a plan for specifying this
information in a different format, but again that would need to wait
until GCC 16.

The patch includes some support for gimple folding.  I initially
tested the patch without it, so that all the rtl expansion code
was exercised.

vlut.c fails for all types with big-endian ILP32, but that's
for a later patch.

gcc/
	* config/aarch64/aarch64.md (UNSPEC_BSL, UNSPEC_COMBINE, UNSPEC_DUP)
	(UNSPEC_DUP_LANE, UNSPEC_GET_LANE, UNSPEC_LD1_DUP, UNSPEC_LD1x2)
	(UNSPEC_LD1x3, UNSPEC_LD1x4, UNSPEC_SET_LANE, UNSPEC_ST1_LANE)
	(UNSPEC_ST1x2, UNSPEC_ST1x3, UNSPEC_ST1x4, UNSPEC_VCREATE)
	(UNSPEC_VEC_COPY): New unspecs.
	* config/aarch64/iterators.md (UNSPEC_TRN): Likewise.
	* config/aarch64/aarch64-simd-pragma-builtins.def: Add definitions
	of the mf8 data movement intrinsics.
	* config/aarch64/aarch64-protos.h
	(aarch64_advsimd_vector_array_mode): Declare.
	* config/aarch64/aarch64.cc
	(aarch64_advsimd_vector_array_mode): Make public.
	* config/aarch64/aarch64-builtins.h (qualifier_const_pointer): New
	aarch64_type_qualifiers member.
	* config/aarch64/aarch64-builtins.cc (AARCH64_SIMD_VGET_LOW_BUILTINS)
	(AARCH64_SIMD_VGET_HIGH_BUILTINS): Add mf8 variants.
	(aarch64_int_or_fp_type): Handle qualifier_modal_float.
	(aarch64_num_lanes): New function.
	(binary_two_lanes, load, load_lane, store, store_lane): New signatures.
	(unary_lane): Likewise.
	(simd_type::nunits): New member function.
	(simd_types): Add pointer types.
	(aarch64_fntype): Handle the new signatures.
	(require_immediate_lane_index): Use aarch64_num_lanes.
	(aarch64_pragma_builtins_checker::check): Handle the new intrinsics.
	(aarch64_convert_address, aarch64_dereference_pointer)
	(aarch64_canonicalize_lane, aarch64_convert_to_lane_mask)
	(aarch64_pack_into_v128s, aarch64_expand_permute_pair)
	(aarch64_expand_tbl_tbx): New functions.
	(aarch64_expand_pragma_builtin): Handle the new intrinsics.
	(aarch64_force_gimple_val, aarch64_copy_vops, aarch64_fold_to_val)
	(aarch64_dereference, aarch64_get_lane_bit_index, aarch64_get_lane)
	(aarch64_set_lane, aarch64_fold_combine, aarch64_fold_load)
	(aarch64_fold_store, aarch64_ext_index, aarch64_rev_index)
	(aarch64_trn_index, aarch64_uzp_index, aarch64_zip_index)
	(aarch64_fold_permute): New functions, some split out from
	aarch64_general_gimple_fold_builtin.
	(aarch64_gimple_fold_pragma_builtin): New function.
	(aarch64_general_gimple_fold_builtin): Use the new functions above.
	* config/aarch64/aarch64-simd.md (aarch64_dup_lane<mode>)
	(aarch64_dup_lane_<vswap_width_name><mode>): Add "@" to name.
	(aarch64_simd_vec_set<mode>): Likewise.
	(*aarch64_simd_vec_copy_lane_<vswap_width_name><mode>): Likewise.
	(aarch64_simd_bsl<mode>): Likewise.
	(aarch64_combine<mode>): Likewise.
	(aarch64_cm<optab><mode><vczle><vczbe>): Likewise.
	(aarch64_simd_ld2r<vstruct_elt>): Likewise.
	(aarch64_vec_load_lanes<mode>_lane<vstruct_elt>): Likewise.
	(aarch64_simd_ld3r<vstruct_elt>): Likewise.
	(aarch64_simd_ld4r<vstruct_elt>): Likewise.
	(aarch64_ld1x3<vstruct_elt>): Likewise.
	(aarch64_ld1x4<vstruct_elt>): Likewise.
	(aarch64_st1x2<vstruct_elt>): Likewise.
	(aarch64_st1x3<vstruct_elt>): Likewise.
	(aarch64_st1x4<vstruct_elt>): Likewise.
	(aarch64_ld<nregs><vstruct_elt>): Likewise.
	(aarch64_ld1<VALL_F16:mode>): Likewise.
	(aarch64_ld1x2<vstruct_elt>): Likewise.
	(aarch64_ld<nregs>_lane<vstruct_elt>): Likewise.
	(aarch64_<PERMUTE:perm_insn><mode><vczle><vczbe>): Likewise.
	(aarch64_ext<mode>): Likewise.
	(aarch64_rev<REVERSE:rev_op><mode><vczle><vczbe>): Likewise.
	(aarch64_st<nregs><vstruct_elt>): Likewise.
	(aarch64_st<nregs>_lane<vstruct_elt>): Likewise.
	(aarch64_st1<VALL_F16:mode>): Likewise.

gcc/testsuite/
	* gcc.target/aarch64/advsimd-intrinsics/arm-neon-ref.h: Add mfloat8
	support.
	* gcc.target/aarch64/advsimd-intrinsics/compute-ref-data.h: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vbsl.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vcombine.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vcreate.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vdup-vmov.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vdup_lane.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vext.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vget_high.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vld1.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vld1_dup.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vld1_lane.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vld1x2.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vld1x3.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vld1x4.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vldX.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vldX_dup.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vldX_lane.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vrev.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vset_lane.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vshuffle.inc: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vst1_lane.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vst1x2.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vst1x3.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vst1x4.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vstX_lane.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vtbX.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vtrn.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vtrn_half.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vuzp.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vuzp_half.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vzip.c: Likewise.
	* gcc.target/aarch64/advsimd-intrinsics/vzip_half.c: Likewise.
	* gcc.target/aarch64/simd/lut.c: Likewise.
	* gcc.target/aarch64/vdup_lane_1.c: Likewise.
	* gcc.target/aarch64/vdup_lane_2.c: Likewise.
	* gcc.target/aarch64/vdup_n_1.c: Likewise.
	* gcc.target/aarch64/vect_copy_lane_1.c: Likewise.
	* gcc.target/aarch64/simd/mf8_data_1.c: New test.
	* gcc.target/aarch64/simd/mf8_data_2.c: Likewise.

Co-authored-by: Saurabh Jha <saurabh.jha@arm.com>
Richard Sandiford 2024-12-30 12:50:56 +00:00
parent 5f40ff8efd
commit ea66f57c96
48 changed files with 4216 additions and 165 deletions

File diff suppressed because it is too large.

gcc/config/aarch64/aarch64-builtins.h

@@ -28,6 +28,8 @@ enum aarch64_type_qualifiers
qualifier_const = 0x2, /* 1 << 1 */
/* T *foo. */
qualifier_pointer = 0x4, /* 1 << 2 */
/* const T *foo. */
qualifier_const_pointer = 0x6,
/* Used when expanding arguments if an operand could
be an immediate. */
qualifier_immediate = 0x8, /* 1 << 3 */

gcc/config/aarch64/aarch64-protos.h

@@ -896,6 +896,8 @@ bool aarch64_move_imm (unsigned HOST_WIDE_INT, machine_mode);
machine_mode aarch64_sve_int_mode (machine_mode);
opt_machine_mode aarch64_sve_pred_mode (unsigned int);
machine_mode aarch64_sve_pred_mode (machine_mode);
opt_machine_mode aarch64_advsimd_vector_array_mode (machine_mode,
unsigned HOST_WIDE_INT);
opt_machine_mode aarch64_sve_data_mode (scalar_mode, poly_uint64);
bool aarch64_sve_mode_p (machine_mode);
HOST_WIDE_INT aarch64_fold_sve_cnt_pat (aarch64_svpattern, unsigned int);

gcc/config/aarch64/aarch64-simd-pragma-builtins.def

@@ -26,6 +26,26 @@
#define ENTRY_BINARY_LANE(N, T0, T1, T2, U, F) \
ENTRY (N, binary_lane, T0, T1, T2, none, U, F)
#undef ENTRY_BINARY_TWO_LANES
#define ENTRY_BINARY_TWO_LANES(N, T0, T1, T2, U, F) \
ENTRY (N, binary_two_lanes, T0, T1, T2, none, U, F)
#undef ENTRY_LOAD
#define ENTRY_LOAD(N, T0, T1, U) \
ENTRY (N, load, T0, T1, none, none, U, LOAD)
#undef ENTRY_LOAD_LANE
#define ENTRY_LOAD_LANE(N, T0, T1, T2, U) \
ENTRY (N, load_lane, T0, T1, T2, none, U, LOAD)
#undef ENTRY_STORE
#define ENTRY_STORE(N, T0, T1, U) \
ENTRY (N, store, T0, T1, none, none, U, STORE)
#undef ENTRY_STORE_LANE
#define ENTRY_STORE_LANE(N, T0, T1, U) \
ENTRY (N, store_lane, T0, T1, none, none, U, STORE)
#undef ENTRY_TERNARY
#define ENTRY_TERNARY(N, T0, T1, T2, T3, U, F) \
ENTRY (N, ternary, T0, T1, T2, T3, U, F)
@@ -38,6 +58,10 @@
#define ENTRY_UNARY(N, T0, T1, U, F) \
ENTRY (N, unary, T0, T1, none, none, U, F)
#undef ENTRY_UNARY_LANE
#define ENTRY_UNARY_LANE(N, T0, T1, U, F) \
ENTRY (N, unary_lane, T0, T1, none, none, U, F)
#undef ENTRY_BINARY_VHSDF
#define ENTRY_BINARY_VHSDF(NAME, UNSPEC, FLAGS) \
ENTRY_BINARY (NAME##_f16, f16, f16, f16, UNSPEC, FLAGS) \
@@ -121,6 +145,7 @@ ENTRY_BINARY_VHSDF (vamin, UNSPEC_FAMIN, FP)
ENTRY_TERNARY_VLUT8 (p)
ENTRY_TERNARY_VLUT8 (s)
ENTRY_TERNARY_VLUT8 (u)
ENTRY_TERNARY_VLUT8 (mf)
ENTRY_TERNARY_VLUT16 (bf)
ENTRY_TERNARY_VLUT16 (f)
@@ -170,3 +195,224 @@ ENTRY_FMA_FPM (vmlallbt, f32, UNSPEC_FMLALLBT_FP8)
ENTRY_FMA_FPM (vmlalltb, f32, UNSPEC_FMLALLTB_FP8)
ENTRY_FMA_FPM (vmlalltt, f32, UNSPEC_FMLALLTT_FP8)
#undef REQUIRED_EXTENSIONS
// bsl
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_TERNARY (vbsl_mf8, mf8, u8, mf8, mf8, UNSPEC_BSL, QUIET)
ENTRY_TERNARY (vbslq_mf8, mf8q, u8q, mf8q, mf8q, UNSPEC_BSL, QUIET)
#undef REQUIRED_EXTENSIONS
// combine
#define REQUIRED_EXTENSIONS nonstreaming_only (NONE)
ENTRY_BINARY (vcombine_mf8, mf8q, mf8, mf8, UNSPEC_COMBINE, QUIET)
#undef REQUIRED_EXTENSIONS
// copy_lane
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_BINARY_TWO_LANES (vcopy_lane_mf8, mf8, mf8, mf8,
UNSPEC_VEC_COPY, QUIET)
ENTRY_BINARY_TWO_LANES (vcopyq_lane_mf8, mf8q, mf8q, mf8,
UNSPEC_VEC_COPY, QUIET)
ENTRY_BINARY_TWO_LANES (vcopy_laneq_mf8, mf8, mf8, mf8q,
UNSPEC_VEC_COPY, QUIET)
ENTRY_BINARY_TWO_LANES (vcopyq_laneq_mf8, mf8q, mf8q, mf8q,
UNSPEC_VEC_COPY, QUIET)
#undef REQUIRED_EXTENSIONS
// create
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_UNARY (vcreate_mf8, mf8, u64_scalar, UNSPEC_VCREATE, QUIET)
#undef REQUIRED_EXTENSIONS
// dup
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_UNARY (vdup_n_mf8, mf8, mf8_scalar, UNSPEC_DUP, QUIET)
ENTRY_UNARY (vdupq_n_mf8, mf8q, mf8_scalar, UNSPEC_DUP, QUIET)
ENTRY_UNARY_LANE (vdup_lane_mf8, mf8, mf8, UNSPEC_DUP_LANE, QUIET)
ENTRY_UNARY_LANE (vdupq_lane_mf8, mf8q, mf8, UNSPEC_DUP_LANE, QUIET)
ENTRY_UNARY_LANE (vdup_laneq_mf8, mf8, mf8q, UNSPEC_DUP_LANE, QUIET)
ENTRY_UNARY_LANE (vdupq_laneq_mf8, mf8q, mf8q, UNSPEC_DUP_LANE, QUIET)
#undef REQUIRED_EXTENSIONS
// dupb_lane
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_UNARY_LANE (vdupb_lane_mf8, mf8_scalar, mf8, UNSPEC_GET_LANE, QUIET)
ENTRY_UNARY_LANE (vdupb_laneq_mf8, mf8_scalar, mf8q, UNSPEC_GET_LANE, QUIET)
#undef REQUIRED_EXTENSIONS
// ext
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_BINARY_LANE (vext_mf8, mf8, mf8, mf8, UNSPEC_EXT, QUIET)
ENTRY_BINARY_LANE (vextq_mf8, mf8q, mf8q, mf8q, UNSPEC_EXT, QUIET)
#undef REQUIRED_EXTENSIONS
// ld1
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_LOAD (vld1_mf8, mf8, mf8_scalar_const_ptr, UNSPEC_LD1)
ENTRY_LOAD (vld1q_mf8, mf8q, mf8_scalar_const_ptr, UNSPEC_LD1)
ENTRY_LOAD (vld1_dup_mf8, mf8, mf8_scalar_const_ptr, UNSPEC_DUP)
ENTRY_LOAD (vld1q_dup_mf8, mf8q, mf8_scalar_const_ptr, UNSPEC_DUP)
ENTRY_LOAD_LANE (vld1_lane_mf8, mf8, mf8_scalar_const_ptr, mf8,
UNSPEC_SET_LANE)
ENTRY_LOAD_LANE (vld1q_lane_mf8, mf8q, mf8_scalar_const_ptr, mf8q,
UNSPEC_SET_LANE)
#undef REQUIRED_EXTENSIONS
// ld<n>
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_LOAD (vld1_mf8_x2, mf8x2, mf8_scalar_const_ptr, UNSPEC_LD1x2)
ENTRY_LOAD (vld1q_mf8_x2, mf8qx2, mf8_scalar_const_ptr, UNSPEC_LD1x2)
ENTRY_LOAD (vld2_mf8, mf8x2, mf8_scalar_const_ptr, UNSPEC_LD2)
ENTRY_LOAD (vld2q_mf8, mf8qx2, mf8_scalar_const_ptr, UNSPEC_LD2)
ENTRY_LOAD (vld2_dup_mf8, mf8x2, mf8_scalar_const_ptr, UNSPEC_LD2_DUP)
ENTRY_LOAD (vld2q_dup_mf8, mf8qx2, mf8_scalar_const_ptr, UNSPEC_LD2_DUP)
ENTRY_LOAD_LANE (vld2_lane_mf8, mf8x2, mf8_scalar_const_ptr, mf8x2,
UNSPEC_LD2_LANE)
ENTRY_LOAD_LANE (vld2q_lane_mf8, mf8qx2, mf8_scalar_const_ptr, mf8qx2,
UNSPEC_LD2_LANE)
ENTRY_LOAD (vld1_mf8_x3, mf8x3, mf8_scalar_const_ptr, UNSPEC_LD1x3)
ENTRY_LOAD (vld1q_mf8_x3, mf8qx3, mf8_scalar_const_ptr, UNSPEC_LD1x3)
ENTRY_LOAD (vld3_mf8, mf8x3, mf8_scalar_const_ptr, UNSPEC_LD3)
ENTRY_LOAD (vld3q_mf8, mf8qx3, mf8_scalar_const_ptr, UNSPEC_LD3)
ENTRY_LOAD (vld3_dup_mf8, mf8x3, mf8_scalar_const_ptr, UNSPEC_LD3_DUP)
ENTRY_LOAD (vld3q_dup_mf8, mf8qx3, mf8_scalar_const_ptr, UNSPEC_LD3_DUP)
ENTRY_LOAD_LANE (vld3_lane_mf8, mf8x3, mf8_scalar_const_ptr, mf8x3,
UNSPEC_LD3_LANE)
ENTRY_LOAD_LANE (vld3q_lane_mf8, mf8qx3, mf8_scalar_const_ptr, mf8qx3,
UNSPEC_LD3_LANE)
ENTRY_LOAD (vld1_mf8_x4, mf8x4, mf8_scalar_const_ptr, UNSPEC_LD1x4)
ENTRY_LOAD (vld1q_mf8_x4, mf8qx4, mf8_scalar_const_ptr, UNSPEC_LD1x4)
ENTRY_LOAD (vld4_mf8, mf8x4, mf8_scalar_const_ptr, UNSPEC_LD4)
ENTRY_LOAD (vld4q_mf8, mf8qx4, mf8_scalar_const_ptr, UNSPEC_LD4)
ENTRY_LOAD (vld4_dup_mf8, mf8x4, mf8_scalar_const_ptr, UNSPEC_LD4_DUP)
ENTRY_LOAD (vld4q_dup_mf8, mf8qx4, mf8_scalar_const_ptr, UNSPEC_LD4_DUP)
ENTRY_LOAD_LANE (vld4_lane_mf8, mf8x4, mf8_scalar_const_ptr, mf8x4,
UNSPEC_LD4_LANE)
ENTRY_LOAD_LANE (vld4q_lane_mf8, mf8qx4, mf8_scalar_const_ptr, mf8qx4,
UNSPEC_LD4_LANE)
#undef REQUIRED_EXTENSIONS
// mov
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_UNARY (vmov_n_mf8, mf8, mf8_scalar, UNSPEC_DUP, QUIET)
ENTRY_UNARY (vmovq_n_mf8, mf8q, mf8_scalar, UNSPEC_DUP, QUIET)
#undef REQUIRED_EXTENSIONS
// rev
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_UNARY (vrev64_mf8, mf8, mf8, UNSPEC_REV64, QUIET)
ENTRY_UNARY (vrev64q_mf8, mf8q, mf8q, UNSPEC_REV64, QUIET)
ENTRY_UNARY (vrev32_mf8, mf8, mf8, UNSPEC_REV32, QUIET)
ENTRY_UNARY (vrev32q_mf8, mf8q, mf8q, UNSPEC_REV32, QUIET)
ENTRY_UNARY (vrev16_mf8, mf8, mf8, UNSPEC_REV16, QUIET)
ENTRY_UNARY (vrev16q_mf8, mf8q, mf8q, UNSPEC_REV16, QUIET)
#undef REQUIRED_EXTENSIONS
// set_lane
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_BINARY_LANE (vset_lane_mf8, mf8, mf8_scalar, mf8, UNSPEC_SET_LANE, QUIET)
ENTRY_BINARY_LANE (vsetq_lane_mf8, mf8q, mf8_scalar, mf8q, UNSPEC_SET_LANE, QUIET)
#undef REQUIRED_EXTENSIONS
// st1
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_STORE (vst1_mf8, mf8_scalar_ptr, mf8, UNSPEC_ST1)
ENTRY_STORE (vst1q_mf8, mf8_scalar_ptr, mf8q, UNSPEC_ST1)
ENTRY_STORE_LANE (vst1_lane_mf8, mf8_scalar_ptr, mf8, UNSPEC_ST1_LANE)
ENTRY_STORE_LANE (vst1q_lane_mf8, mf8_scalar_ptr, mf8q, UNSPEC_ST1_LANE)
#undef REQUIRED_EXTENSIONS
// st<n>
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_STORE (vst2_mf8, mf8_scalar_ptr, mf8x2, UNSPEC_ST2)
ENTRY_STORE (vst2q_mf8, mf8_scalar_ptr, mf8qx2, UNSPEC_ST2)
ENTRY_STORE (vst1_mf8_x2, mf8_scalar_ptr, mf8x2, UNSPEC_ST1x2)
ENTRY_STORE (vst1q_mf8_x2, mf8_scalar_ptr, mf8qx2, UNSPEC_ST1x2)
ENTRY_STORE_LANE (vst2_lane_mf8, mf8_scalar_ptr, mf8x2, UNSPEC_ST2_LANE)
ENTRY_STORE_LANE (vst2q_lane_mf8, mf8_scalar_ptr, mf8qx2, UNSPEC_ST2_LANE)
ENTRY_STORE (vst3_mf8, mf8_scalar_ptr, mf8x3, UNSPEC_ST3)
ENTRY_STORE (vst3q_mf8, mf8_scalar_ptr, mf8qx3, UNSPEC_ST3)
ENTRY_STORE (vst1_mf8_x3, mf8_scalar_ptr, mf8x3, UNSPEC_ST1x3)
ENTRY_STORE (vst1q_mf8_x3, mf8_scalar_ptr, mf8qx3, UNSPEC_ST1x3)
ENTRY_STORE_LANE (vst3_lane_mf8, mf8_scalar_ptr, mf8x3, UNSPEC_ST3_LANE)
ENTRY_STORE_LANE (vst3q_lane_mf8, mf8_scalar_ptr, mf8qx3, UNSPEC_ST3_LANE)
ENTRY_STORE (vst4_mf8, mf8_scalar_ptr, mf8x4, UNSPEC_ST4)
ENTRY_STORE (vst4q_mf8, mf8_scalar_ptr, mf8qx4, UNSPEC_ST4)
ENTRY_STORE (vst1_mf8_x4, mf8_scalar_ptr, mf8x4, UNSPEC_ST1x4)
ENTRY_STORE (vst1q_mf8_x4, mf8_scalar_ptr, mf8qx4, UNSPEC_ST1x4)
ENTRY_STORE_LANE (vst4_lane_mf8, mf8_scalar_ptr, mf8x4, UNSPEC_ST4_LANE)
ENTRY_STORE_LANE (vst4q_lane_mf8, mf8_scalar_ptr, mf8qx4, UNSPEC_ST4_LANE)
#undef REQUIRED_EXTENSIONS
// tbl<n>
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_BINARY (vtbl1_mf8, mf8, mf8, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vtbl2_mf8, mf8, mf8x2, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vtbl3_mf8, mf8, mf8x3, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vtbl4_mf8, mf8, mf8x4, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl1_mf8, mf8, mf8q, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl1q_mf8, mf8q, mf8q, u8q, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl2_mf8, mf8, mf8qx2, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl2q_mf8, mf8q, mf8qx2, u8q, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl3_mf8, mf8, mf8qx3, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl3q_mf8, mf8q, mf8qx3, u8q, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl4_mf8, mf8, mf8qx4, u8, UNSPEC_TBL, QUIET)
ENTRY_BINARY (vqtbl4q_mf8, mf8q, mf8qx4, u8q, UNSPEC_TBL, QUIET)
#undef REQUIRED_EXTENSIONS
// tbx<n>
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_TERNARY (vtbx1_mf8, mf8, mf8, mf8, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vtbx2_mf8, mf8, mf8, mf8x2, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vtbx3_mf8, mf8, mf8, mf8x3, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vtbx4_mf8, mf8, mf8, mf8x4, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx1_mf8, mf8, mf8, mf8q, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx1q_mf8, mf8q, mf8q, mf8q, u8q, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx2_mf8, mf8, mf8, mf8qx2, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx2q_mf8, mf8q, mf8q, mf8qx2, u8q, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx3_mf8, mf8, mf8, mf8qx3, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx3q_mf8, mf8q, mf8q, mf8qx3, u8q, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx4_mf8, mf8, mf8, mf8qx4, u8, UNSPEC_TBX, QUIET)
ENTRY_TERNARY (vqtbx4q_mf8, mf8q, mf8q, mf8qx4, u8q, UNSPEC_TBX, QUIET)
#undef REQUIRED_EXTENSIONS
// trn<n>
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_BINARY (vtrn1_mf8, mf8, mf8, mf8, UNSPEC_TRN1, QUIET)
ENTRY_BINARY (vtrn1q_mf8, mf8q, mf8q, mf8q, UNSPEC_TRN1, QUIET)
ENTRY_BINARY (vtrn2_mf8, mf8, mf8, mf8, UNSPEC_TRN2, QUIET)
ENTRY_BINARY (vtrn2q_mf8, mf8q, mf8q, mf8q, UNSPEC_TRN2, QUIET)
ENTRY_BINARY (vtrn_mf8, mf8x2, mf8, mf8, UNSPEC_TRN, QUIET)
ENTRY_BINARY (vtrnq_mf8, mf8qx2, mf8q, mf8q, UNSPEC_TRN, QUIET)
#undef REQUIRED_EXTENSIONS
// uzp<n>
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_BINARY (vuzp1_mf8, mf8, mf8, mf8, UNSPEC_UZP1, QUIET)
ENTRY_BINARY (vuzp1q_mf8, mf8q, mf8q, mf8q, UNSPEC_UZP1, QUIET)
ENTRY_BINARY (vuzp2_mf8, mf8, mf8, mf8, UNSPEC_UZP2, QUIET)
ENTRY_BINARY (vuzp2q_mf8, mf8q, mf8q, mf8q, UNSPEC_UZP2, QUIET)
ENTRY_BINARY (vuzp_mf8, mf8x2, mf8, mf8, UNSPEC_UZP, QUIET)
ENTRY_BINARY (vuzpq_mf8, mf8qx2, mf8q, mf8q, UNSPEC_UZP, QUIET)
#undef REQUIRED_EXTENSIONS
// zip<n>
#define REQUIRED_EXTENSIONS nonstreaming_only (TARGET_SIMD)
ENTRY_BINARY (vzip1_mf8, mf8, mf8, mf8, UNSPEC_ZIP1, QUIET)
ENTRY_BINARY (vzip1q_mf8, mf8q, mf8q, mf8q, UNSPEC_ZIP1, QUIET)
ENTRY_BINARY (vzip2_mf8, mf8, mf8, mf8, UNSPEC_ZIP2, QUIET)
ENTRY_BINARY (vzip2q_mf8, mf8q, mf8q, mf8q, UNSPEC_ZIP2, QUIET)
ENTRY_BINARY (vzip_mf8, mf8x2, mf8, mf8, UNSPEC_ZIP, QUIET)
ENTRY_BINARY (vzipq_mf8, mf8qx2, mf8q, mf8q, UNSPEC_ZIP, QUIET)
#undef REQUIRED_EXTENSIONS

gcc/config/aarch64/aarch64-simd.md

@@ -112,7 +112,7 @@
}
)
(define_insn "aarch64_dup_lane<mode>"
(define_insn "@aarch64_dup_lane<mode>"
[(set (match_operand:VALL_F16 0 "register_operand" "=w")
(vec_duplicate:VALL_F16
(vec_select:<VEL>
@@ -127,7 +127,7 @@
[(set_attr "type" "neon_dup<q>")]
)
(define_insn "aarch64_dup_lane_<vswap_width_name><mode>"
(define_insn "@aarch64_dup_lane_<vswap_width_name><mode>"
[(set (match_operand:VALL_F16_NO_V2Q 0 "register_operand" "=w")
(vec_duplicate:VALL_F16_NO_V2Q
(vec_select:<VEL>
@@ -1164,7 +1164,7 @@
[(set_attr "type" "neon_logic<q>")]
)
(define_insn "aarch64_simd_vec_set<mode>"
(define_insn "@aarch64_simd_vec_set<mode>"
[(set (match_operand:VALL_F16 0 "register_operand" "=w,w,w")
(vec_merge:VALL_F16
(vec_duplicate:VALL_F16
@@ -1225,7 +1225,7 @@
[(set_attr "type" "neon_ins<q>")]
)
(define_insn "*aarch64_simd_vec_copy_lane_<vswap_width_name><mode>"
(define_insn "@aarch64_simd_vec_copy_lane_<vswap_width_name><mode>"
[(set (match_operand:VALL_F16_NO_V2Q 0 "register_operand" "=w")
(vec_merge:VALL_F16_NO_V2Q
(vec_duplicate:VALL_F16_NO_V2Q
@@ -3837,7 +3837,7 @@
}
)
(define_expand "aarch64_simd_bsl<mode>"
(define_expand "@aarch64_simd_bsl<mode>"
[(match_operand:VALLDIF 0 "register_operand")
(match_operand:<V_INT_EQUIV> 1 "register_operand")
(match_operand:VALLDIF 2 "register_operand")
@@ -4438,7 +4438,7 @@
;; Form a vector whose least significant half comes from operand 1 and whose
;; most significant half comes from operand 2. This operand order follows
;; arm_neon.h vcombine* intrinsics.
(define_expand "aarch64_combine<mode>"
(define_expand "@aarch64_combine<mode>"
[(match_operand:<VDBL> 0 "register_operand")
(match_operand:VDC 1 "general_operand")
(match_operand:VDC 2 "general_operand")]
@@ -6971,7 +6971,7 @@
;; Note, we have constraints for Dz and Z as different expanders
;; have different ideas of what should be passed to this pattern.
(define_insn "aarch64_cm<optab><mode><vczle><vczbe>"
(define_insn "@aarch64_cm<optab><mode><vczle><vczbe>"
[(set (match_operand:<V_INT_EQUIV> 0 "register_operand")
(neg:<V_INT_EQUIV>
(COMPARISONS:<V_INT_EQUIV>
@@ -7036,7 +7036,7 @@
;; cm(hs|hi)
(define_insn "aarch64_cm<optab><mode><vczle><vczbe>"
(define_insn "@aarch64_cm<optab><mode><vczle><vczbe>"
[(set (match_operand:<V_INT_EQUIV> 0 "register_operand" "=w")
(neg:<V_INT_EQUIV>
(UCOMPARISONS:<V_INT_EQUIV>
@@ -7188,7 +7188,7 @@
;; fcm(eq|ge|gt|le|lt)
(define_insn "aarch64_cm<optab><mode><vczle><vczbe>"
(define_insn "@aarch64_cm<optab><mode><vczle><vczbe>"
[(set (match_operand:<V_INT_EQUIV> 0 "register_operand")
(neg:<V_INT_EQUIV>
(COMPARISONS:<V_INT_EQUIV>
@@ -7349,7 +7349,7 @@
[(set_attr "type" "neon_load2_2reg<q>")]
)
(define_insn "aarch64_simd_ld2r<vstruct_elt>"
(define_insn "@aarch64_simd_ld2r<vstruct_elt>"
[(set (match_operand:VSTRUCT_2QD 0 "register_operand" "=w")
(unspec:VSTRUCT_2QD [
(match_operand:BLK 1 "aarch64_simd_struct_operand" "Utv")]
@@ -7359,7 +7359,7 @@
[(set_attr "type" "neon_load2_all_lanes<q>")]
)
(define_insn "aarch64_vec_load_lanes<mode>_lane<vstruct_elt>"
(define_insn "@aarch64_vec_load_lanes<mode>_lane<vstruct_elt>"
[(set (match_operand:VSTRUCT_2QD 0 "register_operand" "=w")
(unspec:VSTRUCT_2QD [
(match_operand:BLK 1 "aarch64_simd_struct_operand" "Utv")
@@ -7449,7 +7449,7 @@
[(set_attr "type" "neon_load3_3reg<q>")]
)
(define_insn "aarch64_simd_ld3r<vstruct_elt>"
(define_insn "@aarch64_simd_ld3r<vstruct_elt>"
[(set (match_operand:VSTRUCT_3QD 0 "register_operand" "=w")
(unspec:VSTRUCT_3QD [
(match_operand:BLK 1 "aarch64_simd_struct_operand" "Utv")]
@@ -7549,7 +7549,7 @@
[(set_attr "type" "neon_load4_4reg<q>")]
)
(define_insn "aarch64_simd_ld4r<vstruct_elt>"
(define_insn "@aarch64_simd_ld4r<vstruct_elt>"
[(set (match_operand:VSTRUCT_4QD 0 "register_operand" "=w")
(unspec:VSTRUCT_4QD [
(match_operand:BLK 1 "aarch64_simd_struct_operand" "Utv")]
@@ -7773,7 +7773,7 @@
operands[1] = force_reg (V8DImode, operands[1]);
})
(define_expand "aarch64_ld1x3<vstruct_elt>"
(define_expand "@aarch64_ld1x3<vstruct_elt>"
[(match_operand:VSTRUCT_3QD 0 "register_operand")
(match_operand:DI 1 "register_operand")]
"TARGET_SIMD"
@@ -7793,7 +7793,7 @@
[(set_attr "type" "neon_load1_3reg<q>")]
)
(define_expand "aarch64_ld1x4<vstruct_elt>"
(define_expand "@aarch64_ld1x4<vstruct_elt>"
[(match_operand:VSTRUCT_4QD 0 "register_operand" "=w")
(match_operand:DI 1 "register_operand" "r")]
"TARGET_SIMD"
@@ -7813,7 +7813,7 @@
[(set_attr "type" "neon_load1_4reg<q>")]
)
(define_expand "aarch64_st1x2<vstruct_elt>"
(define_expand "@aarch64_st1x2<vstruct_elt>"
[(match_operand:DI 0 "register_operand")
(match_operand:VSTRUCT_2QD 1 "register_operand")]
"TARGET_SIMD"
@@ -7833,7 +7833,7 @@
[(set_attr "type" "neon_store1_2reg<q>")]
)
(define_expand "aarch64_st1x3<vstruct_elt>"
(define_expand "@aarch64_st1x3<vstruct_elt>"
[(match_operand:DI 0 "register_operand")
(match_operand:VSTRUCT_3QD 1 "register_operand")]
"TARGET_SIMD"
@@ -7853,7 +7853,7 @@
[(set_attr "type" "neon_store1_3reg<q>")]
)
(define_expand "aarch64_st1x4<vstruct_elt>"
(define_expand "@aarch64_st1x4<vstruct_elt>"
[(match_operand:DI 0 "register_operand" "")
(match_operand:VSTRUCT_4QD 1 "register_operand" "")]
"TARGET_SIMD"
@@ -8220,7 +8220,7 @@
[(set_attr "type" "neon_load1_4reg<q>")]
)
(define_expand "aarch64_ld<nregs><vstruct_elt>"
(define_expand "@aarch64_ld<nregs><vstruct_elt>"
[(match_operand:VSTRUCT_D 0 "register_operand")
(match_operand:DI 1 "register_operand")]
"TARGET_SIMD"
@@ -8230,7 +8230,7 @@
DONE;
})
(define_expand "aarch64_ld1<VALL_F16:mode>"
(define_expand "@aarch64_ld1<VALL_F16:mode>"
[(match_operand:VALL_F16 0 "register_operand")
(match_operand:DI 1 "register_operand")]
"TARGET_SIMD"
@@ -8245,7 +8245,7 @@
DONE;
})
(define_expand "aarch64_ld<nregs><vstruct_elt>"
(define_expand "@aarch64_ld<nregs><vstruct_elt>"
[(match_operand:VSTRUCT_Q 0 "register_operand")
(match_operand:DI 1 "register_operand")]
"TARGET_SIMD"
@@ -8255,7 +8255,7 @@
DONE;
})
(define_expand "aarch64_ld1x2<vstruct_elt>"
(define_expand "@aarch64_ld1x2<vstruct_elt>"
[(match_operand:VSTRUCT_2QD 0 "register_operand")
(match_operand:DI 1 "register_operand")]
"TARGET_SIMD"
@@ -8267,7 +8267,7 @@
DONE;
})
(define_expand "aarch64_ld<nregs>_lane<vstruct_elt>"
(define_expand "@aarch64_ld<nregs>_lane<vstruct_elt>"
[(match_operand:VSTRUCT_QD 0 "register_operand")
(match_operand:DI 1 "register_operand")
(match_operand:VSTRUCT_QD 2 "register_operand")
@@ -8411,7 +8411,7 @@
;; This instruction's pattern is generated directly by
;; aarch64_expand_vec_perm_const, so any changes to the pattern would
;; need corresponding changes there.
(define_insn "aarch64_<PERMUTE:perm_insn><mode><vczle><vczbe>"
(define_insn "@aarch64_<PERMUTE:perm_insn><mode><vczle><vczbe>"
[(set (match_operand:VALL_F16 0 "register_operand" "=w")
(unspec:VALL_F16 [(match_operand:VALL_F16 1 "register_operand" "w")
(match_operand:VALL_F16 2 "register_operand" "w")]
@@ -8437,7 +8437,7 @@
;; aarch64_expand_vec_perm_const, so any changes to the pattern would
;; need corresponding changes there. Note that the immediate (third)
;; operand is a lane index not a byte index.
(define_insn "aarch64_ext<mode>"
(define_insn "@aarch64_ext<mode>"
[(set (match_operand:VALL_F16 0 "register_operand" "=w")
(unspec:VALL_F16 [(match_operand:VALL_F16 1 "register_operand" "w")
(match_operand:VALL_F16 2 "register_operand" "w")
@@ -8455,7 +8455,7 @@
;; This instruction's pattern is generated directly by
;; aarch64_expand_vec_perm_const, so any changes to the pattern would
;; need corresponding changes there.
(define_insn "aarch64_rev<REVERSE:rev_op><mode><vczle><vczbe>"
(define_insn "@aarch64_rev<REVERSE:rev_op><mode><vczle><vczbe>"
[(set (match_operand:VALL_F16 0 "register_operand" "=w")
(unspec:VALL_F16 [(match_operand:VALL_F16 1 "register_operand" "w")]
REVERSE))]
@@ -8524,7 +8524,7 @@
[(set_attr "type" "neon_store1_4reg")]
)
(define_expand "aarch64_st<nregs><vstruct_elt>"
(define_expand "@aarch64_st<nregs><vstruct_elt>"
[(match_operand:DI 0 "register_operand")
(match_operand:VSTRUCT_D 1 "register_operand")]
"TARGET_SIMD"
@@ -8534,7 +8534,7 @@
DONE;
})
(define_expand "aarch64_st<nregs><vstruct_elt>"
(define_expand "@aarch64_st<nregs><vstruct_elt>"
[(match_operand:DI 0 "register_operand")
(match_operand:VSTRUCT_Q 1 "register_operand")]
"TARGET_SIMD"
@@ -8544,7 +8544,7 @@
DONE;
})
(define_expand "aarch64_st<nregs>_lane<vstruct_elt>"
(define_expand "@aarch64_st<nregs>_lane<vstruct_elt>"
[(match_operand:DI 0 "register_operand")
(match_operand:VSTRUCT_QD 1 "register_operand")
(match_operand:SI 2 "immediate_operand")]
@@ -8560,7 +8560,7 @@
DONE;
})
(define_expand "aarch64_st1<VALL_F16:mode>"
(define_expand "@aarch64_st1<VALL_F16:mode>"
[(match_operand:DI 0 "register_operand")
(match_operand:VALL_F16 1 "register_operand")]
"TARGET_SIMD"

gcc/config/aarch64/aarch64.cc

@@ -1802,7 +1802,7 @@ aarch64_ldn_stn_vectors (machine_mode mode)
/* Given an Advanced SIMD vector mode MODE and a tuple size NELEMS, return the
corresponding vector structure mode. */
static opt_machine_mode
opt_machine_mode
aarch64_advsimd_vector_array_mode (machine_mode mode,
unsigned HOST_WIDE_INT nelems)
{

gcc/config/aarch64/aarch64.md

@@ -198,8 +198,10 @@
UNSPEC_AUTIB1716
UNSPEC_AUTIASP
UNSPEC_AUTIBSP
UNSPEC_BSL
UNSPEC_CALLEE_ABI
UNSPEC_CASESI
UNSPEC_COMBINE
UNSPEC_CPYMEM
UNSPEC_CRC32B
UNSPEC_CRC32CB
@@ -209,6 +211,8 @@
UNSPEC_CRC32H
UNSPEC_CRC32W
UNSPEC_CRC32X
UNSPEC_DUP
UNSPEC_DUP_LANE
UNSPEC_FCVTZS
UNSPEC_FCVTZU
UNSPEC_FJCVTZS
@@ -227,6 +231,7 @@
UNSPEC_FRINTP
UNSPEC_FRINTX
UNSPEC_FRINTZ
UNSPEC_GET_LANE
UNSPEC_GOTSMALLPIC
UNSPEC_GOTSMALLPIC28K
UNSPEC_GOTSMALLTLS
@@ -236,6 +241,10 @@
UNSPEC_LDP_FST
UNSPEC_LDP_SND
UNSPEC_LD1
UNSPEC_LD1_DUP
UNSPEC_LD1x2
UNSPEC_LD1x3
UNSPEC_LD1x4
UNSPEC_LD2
UNSPEC_LD2_DREG
UNSPEC_LD2_DUP
@@ -265,12 +274,17 @@
UNSPEC_REV
UNSPEC_SADALP
UNSPEC_SCVTF
UNSPEC_SET_LANE
UNSPEC_SETMEM
UNSPEC_SISD_NEG
UNSPEC_SISD_SSHL
UNSPEC_SISD_USHL
UNSPEC_SSHL_2S
UNSPEC_ST1
UNSPEC_ST1_LANE
UNSPEC_ST1x2
UNSPEC_ST1x3
UNSPEC_ST1x4
UNSPEC_ST2
UNSPEC_ST3
UNSPEC_ST4
@@ -314,6 +328,8 @@
UNSPEC_UNPACKSLO
UNSPEC_UNPACKULO
UNSPEC_PACK
UNSPEC_VCREATE
UNSPEC_VEC_COPY
UNSPEC_WHILEGE
UNSPEC_WHILEGT
UNSPEC_WHILEHI

gcc/config/aarch64/iterators.md

@@ -1095,6 +1095,7 @@
UNSPEC_SUBHNB ; Used in aarch64-sve2.md.
UNSPEC_SUBHNT ; Used in aarch64-sve2.md.
UNSPEC_TBL2 ; Used in aarch64-sve2.md.
UNSPEC_TRN ; Used in aarch64-builtins.cc
UNSPEC_UABDLB ; Used in aarch64-sve2.md.
UNSPEC_UABDLT ; Used in aarch64-sve2.md.
UNSPEC_UADDLB ; Used in aarch64-sve2.md.

gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/arm-neon-ref.h

@@ -7,6 +7,7 @@
#include <inttypes.h>
/* helper type, to help write floating point results in integer form. */
typedef uint8_t hmfloat8_t;
typedef uint16_t hfloat16_t;
typedef uint32_t hfloat32_t;
typedef uint64_t hfloat64_t;
@@ -38,10 +39,24 @@ extern size_t strlen(const char *);
Use this macro to guard against them. */
#ifdef __aarch64__
#define AARCH64_ONLY(X) X
#define MFLOAT8_SUPPORTED 1
#else
#define AARCH64_ONLY(X)
#define MFLOAT8_SUPPORTED 0
#endif
#if MFLOAT8_SUPPORTED
#define MFLOAT8_ONLY(X) X
#define MFLOAT8(X) (((union { uint8_t x; mfloat8_t y; }) { X }).y)
#define CONVERT(T, X) \
((T) _Generic ((T){}, mfloat8_t: MFLOAT8(X), default: X))
#else
#define MFLOAT8_ONLY(X)
#define CONVERT(T, X) ((T) X)
#endif
#define BITEQUAL(X, Y) (__builtin_memcmp (&X, &Y, sizeof(X)) == 0)
#define xSTR(X) #X
#define STR(X) xSTR(X)
@@ -182,6 +197,9 @@ static ARRAY(result, poly, 16, 4);
#if defined (__ARM_FEATURE_CRYPTO)
static ARRAY(result, poly, 64, 1);
#endif
#if MFLOAT8_SUPPORTED
static ARRAY(result, mfloat, 8, 8);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
static ARRAY(result, float, 16, 4);
#endif
@@ -202,6 +220,9 @@ static ARRAY(result, poly, 16, 8);
#if defined (__ARM_FEATURE_CRYPTO)
static ARRAY(result, poly, 64, 2);
#endif
#if MFLOAT8_SUPPORTED
static ARRAY(result, mfloat, 8, 16);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
static ARRAY(result, float, 16, 8);
#endif
@@ -222,6 +243,9 @@ extern ARRAY(expected, uint, 32, 2);
extern ARRAY(expected, uint, 64, 1);
extern ARRAY(expected, poly, 8, 8);
extern ARRAY(expected, poly, 16, 4);
#if MFLOAT8_SUPPORTED
extern ARRAY(expected, hmfloat, 8, 8);
#endif
extern ARRAY(expected, hfloat, 16, 4);
extern ARRAY(expected, hfloat, 32, 2);
extern ARRAY(expected, hfloat, 64, 1);
@@ -235,6 +259,9 @@ extern ARRAY(expected, uint, 32, 4);
extern ARRAY(expected, uint, 64, 2);
extern ARRAY(expected, poly, 8, 16);
extern ARRAY(expected, poly, 16, 8);
#if MFLOAT8_SUPPORTED
extern ARRAY(expected, hmfloat, 8, 16);
#endif
extern ARRAY(expected, hfloat, 16, 8);
extern ARRAY(expected, hfloat, 32, 4);
extern ARRAY(expected, hfloat, 64, 2);
@@ -251,6 +278,8 @@ extern ARRAY(expected, hfloat, 64, 2);
CHECK(test_name, uint, 64, 1, PRIx64, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 8, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 4, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 8, PRIx8, \
EXPECTED, comment);) \
CHECK_FP(test_name, float, 32, 2, PRIx32, EXPECTED, comment); \
\
CHECK(test_name, int, 8, 16, PRIx8, EXPECTED, comment); \
@ -263,6 +292,8 @@ extern ARRAY(expected, hfloat, 64, 2);
CHECK(test_name, uint, 64, 2, PRIx64, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 16, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 8, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 16, PRIx8, \
EXPECTED, comment);) \
CHECK_FP(test_name, float, 32, 4, PRIx32, EXPECTED, comment); \
} \
@ -372,6 +403,9 @@ static void clean_results (void)
#if defined (__ARM_FEATURE_CRYPTO)
CLEAN(result, poly, 64, 1);
#endif
#if MFLOAT8_SUPPORTED
CLEAN(result, mfloat, 8, 8);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
CLEAN(result, float, 16, 4);
#endif
@ -390,6 +424,9 @@ static void clean_results (void)
#if defined (__ARM_FEATURE_CRYPTO)
CLEAN(result, poly, 64, 2);
#endif
#if MFLOAT8_SUPPORTED
CLEAN(result, mfloat, 8, 16);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
CLEAN(result, float, 16, 8);
#endif
@ -460,6 +497,7 @@ static void clean_results (void)
DECL_VARIABLE(VAR, poly, 8, 8); \
DECL_VARIABLE(VAR, poly, 16, 4); \
DECL_VARIABLE_CRYPTO(VAR, poly, 64, 1); \
MFLOAT8_ONLY(DECL_VARIABLE(VAR, mfloat, 8, 8);) \
DECL_VARIABLE(VAR, float, 16, 4); \
DECL_VARIABLE(VAR, float, 32, 2)
#else
@ -480,6 +518,7 @@ static void clean_results (void)
DECL_VARIABLE(VAR, poly, 8, 16); \
DECL_VARIABLE(VAR, poly, 16, 8); \
DECL_VARIABLE_CRYPTO(VAR, poly, 64, 2); \
MFLOAT8_ONLY(DECL_VARIABLE(VAR, mfloat, 8, 16);) \
DECL_VARIABLE(VAR, float, 16, 8); \
DECL_VARIABLE(VAR, float, 32, 4); \
AARCH64_ONLY(DECL_VARIABLE(VAR, float, 64, 2))
@ -490,6 +529,7 @@ static void clean_results (void)
DECL_VARIABLE(VAR, poly, 8, 16); \
DECL_VARIABLE(VAR, poly, 16, 8); \
DECL_VARIABLE_CRYPTO(VAR, poly, 64, 2); \
MFLOAT8_ONLY(DECL_VARIABLE(VAR, mfloat, 8, 16);) \
DECL_VARIABLE(VAR, float, 32, 4); \
AARCH64_ONLY(DECL_VARIABLE(VAR, float, 64, 2))
#endif

@ -122,6 +122,10 @@ PAD(buffer_pad, uint, 64, 1);
VECT_VAR_DECL_INIT(buffer, poly, 64, 1);
PAD(buffer_pad, poly, 64, 1);
#endif
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer, mfloat, 8, 8)[8];
PAD(buffer_pad, mfloat, 8, 8);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer, float, 16, 4);
PAD(buffer_pad, float, 16, 4);
@ -152,6 +156,10 @@ PAD(buffer_pad, poly, 16, 8);
VECT_VAR_DECL_INIT(buffer, poly, 64, 2);
PAD(buffer_pad, poly, 64, 2);
#endif
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer, mfloat, 8, 16)[16];
PAD(buffer_pad, mfloat, 8, 16);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer, float, 16, 8);
PAD(buffer_pad, float, 16, 8);
@ -190,6 +198,10 @@ VECT_VAR_DECL(buffer_dup_pad, poly, 16, 4);
VECT_VAR_DECL_INIT4(buffer_dup, poly, 64, 1);
VECT_VAR_DECL(buffer_dup_pad, poly, 64, 1);
#endif
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_dup, mfloat, 8, 8)[8];
PAD(buffer_dup_pad, mfloat, 8, 8);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT4(buffer_dup, float, 16, 4);
VECT_VAR_DECL(buffer_dup_pad, float, 16, 4);
@ -221,9 +233,26 @@ VECT_VAR_DECL(buffer_dup_pad, poly, 16, 8);
VECT_VAR_DECL_INIT4(buffer_dup, poly, 64, 2);
VECT_VAR_DECL(buffer_dup_pad, poly, 64, 2);
#endif
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_dup, mfloat, 8, 16)[16];
PAD(buffer_dup_pad, mfloat, 8, 16);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer_dup, float, 16, 8);
VECT_VAR_DECL(buffer_dup_pad, float, 16, 8);
#endif
VECT_VAR_DECL_INIT(buffer_dup, float, 32, 4);
VECT_VAR_DECL(buffer_dup_pad, float, 32, 4);
#if MFLOAT8_SUPPORTED
static void __attribute__((constructor))
copy_mfloat8 ()
{
memcpy (VECT_VAR(buffer, mfloat, 8, 8), VECT_VAR(buffer, uint, 8, 8), 8);
memcpy (VECT_VAR(buffer, mfloat, 8, 16), VECT_VAR(buffer, uint, 8, 16), 16);
memcpy (VECT_VAR(buffer_dup, mfloat, 8, 8),
VECT_VAR(buffer_dup, uint, 8, 8), 8);
memcpy (VECT_VAR(buffer_dup, mfloat, 8, 16),
VECT_VAR(buffer_dup, uint, 8, 16), 16);
}
#endif

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffff1 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf3, 0xf3, 0xf3, 0xf3,
0xf7, 0xf7, 0xf7, 0xf7 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff0, 0xfff0, 0xfff2, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xfa, 0xfa, 0xfa, 0xfa,
0xfe, 0xfe, 0xfe, 0xfe };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 4) [] = { 0xcc09, 0xcb89,
0xcb09, 0xca89 };
@ -47,6 +51,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf3, 0xf3, 0xf3, 0xf3,
0xf7, 0xf7, 0xf7, 0xf7 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff0, 0xfff0, 0xfff2, 0xfff2,
0xfff4, 0xfff4, 0xfff6, 0xfff6 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf5, 0xf5, 0xf5, 0xf5,
0xf1, 0xf1, 0xf1, 0xf1,
0xf5, 0xf5, 0xf5, 0xf5 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 8) [] = { 0xcc09, 0xcb89,
0xcb09, 0xca89,
@ -76,6 +86,10 @@ void exec_vbsl (void)
clean_results ();
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector, buffer);
#if MFLOAT8_SUPPORTED
VLOAD(vector, buffer, , mfloat, mf, 8, 8);
VLOAD(vector, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (FP16_SUPPORTED)
VLOAD(vector, buffer, , float, f, 16, 4);
VLOAD(vector, buffer, q, float, f, 16, 8);
@ -94,6 +108,7 @@ void exec_vbsl (void)
VDUP(vector2, , uint, u, 16, 4, 0xFFF2);
VDUP(vector2, , uint, u, 32, 2, 0xFFFFFFF0);
VDUP(vector2, , uint, u, 64, 1, 0xFFFFFFF3);
MFLOAT8_ONLY(VDUP(vector2, , mfloat, mf, 8, 8, MFLOAT8(0xca));)
#if defined (FP16_SUPPORTED)
VDUP(vector2, , float, f, 16, 4, -2.4f); /* -2.4f is 0xC0CD. */
#endif
@ -111,6 +126,7 @@ void exec_vbsl (void)
VDUP(vector2, q, uint, u, 64, 2, 0xFFFFFFF3);
VDUP(vector2, q, poly, p, 8, 16, 0xF3);
VDUP(vector2, q, poly, p, 16, 8, 0xFFF2);
MFLOAT8_ONLY(VDUP(vector2, q, mfloat, mf, 8, 16, MFLOAT8(0x55));)
#if defined (FP16_SUPPORTED)
VDUP(vector2, q, float, f, 16, 8, -2.4f);
#endif
@ -131,6 +147,10 @@ void exec_vbsl (void)
TEST_VBSL(uint, , poly, p, 16, 4);
TEST_VBSL(uint, q, poly, p, 8, 16);
TEST_VBSL(uint, q, poly, p, 16, 8);
#if MFLOAT8_SUPPORTED
TEST_VBSL(uint, , mfloat, mf, 8, 8);
TEST_VBSL(uint, q, mfloat, mf, 8, 16);
#endif
#if defined (FP16_SUPPORTED)
TEST_VBSL(uint, , float, f, 16, 4);
TEST_VBSL(uint, q, float, f, 16, 8);

@ -25,6 +25,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
0x66, 0x66, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7,
0xcc, 0xcc, 0xcc, 0xcc,
0xcc, 0xcc, 0xcc, 0xcc };
#endif
VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
0x40533333, 0x40533333 };
VECT_VAR_DECL(expected,hfloat,16,8) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80,
@ -46,6 +52,7 @@ void exec_vcombine (void)
/* Initialize input "vector64_a" from "buffer". */
TEST_MACRO_64BITS_VARIANTS_2_5(VLOAD, vector64_a, buffer);
MFLOAT8_ONLY(VLOAD(vector64_a, buffer, , mfloat, mf, 8, 8);)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VLOAD(vector64_a, buffer, , float, f, 16, 4);
#endif
@ -62,6 +69,7 @@ void exec_vcombine (void)
VDUP(vector64_b, , uint, u, 64, 1, 0x88);
VDUP(vector64_b, , poly, p, 8, 8, 0x55);
VDUP(vector64_b, , poly, p, 16, 4, 0x66);
MFLOAT8_ONLY(VDUP(vector64_b, , mfloat, mf, 8, 8, MFLOAT8(0xcc));)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VDUP(vector64_b, , float, f, 16, 4, 2.25);
#endif
@ -80,6 +88,7 @@ void exec_vcombine (void)
TEST_VCOMBINE(uint, u, 64, 1, 2);
TEST_VCOMBINE(poly, p, 8, 8, 16);
TEST_VCOMBINE(poly, p, 16, 4, 8);
MFLOAT8_ONLY(TEST_VCOMBINE(mfloat, mf, 8, 8, 16);)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VCOMBINE(float, f, 16, 4, 8);
#endif
@ -95,6 +104,7 @@ void exec_vcombine (void)
CHECK(TEST_MSG, uint, 64, 2, PRIx64, expected, "");
CHECK_POLY(TEST_MSG, poly, 8, 16, PRIx8, expected, "");
CHECK_POLY(TEST_MSG, poly, 16, 8, PRIx16, expected, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 16, PRIx16, expected, "");)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
CHECK_FP(TEST_MSG, float, 16, 8, PRIx16, expected, "");
#endif

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0x123456789abcdef0 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf0, 0xde, 0xbc, 0x9a,
0x78, 0x56, 0x34, 0x12 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xdef0, 0x9abc, 0x5678, 0x1234 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf0, 0xde, 0xbc, 0x9a,
0x78, 0x56, 0x34, 0x12 };
#endif
VECT_VAR_DECL(expected,hfloat,16,4) [] = { 0xdef0, 0x9abc, 0x5678, 0x1234 };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0x9abcdef0, 0x12345678 };
@ -39,6 +43,7 @@ FNNAME (INSN_NAME)
DECL_VAL(val, int, 16, 4);
DECL_VAL(val, int, 32, 2);
DECL_VAL(val, int, 64, 1);
MFLOAT8_ONLY(DECL_VAL(val, mfloat, 8, 8);)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
DECL_VAL(val, float, 16, 4);
#endif
@ -54,6 +59,7 @@ FNNAME (INSN_NAME)
DECL_VARIABLE(vector_res, int, 16, 4);
DECL_VARIABLE(vector_res, int, 32, 2);
DECL_VARIABLE(vector_res, int, 64, 1);
MFLOAT8_ONLY(DECL_VARIABLE(vector_res, mfloat, 8, 8);)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
DECL_VARIABLE(vector_res, float, 16, 4);
#endif
@ -72,6 +78,7 @@ FNNAME (INSN_NAME)
VECT_VAR(val, int, 16, 4) = 0x123456789abcdef0LL;
VECT_VAR(val, int, 32, 2) = 0x123456789abcdef0LL;
VECT_VAR(val, int, 64, 1) = 0x123456789abcdef0LL;
MFLOAT8_ONLY(VECT_VAR(val, mfloat, 8, 8) = 0x123456789abcdef0LL;)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR(val, float, 16, 4) = 0x123456789abcdef0LL;
#endif
@ -86,6 +93,7 @@ FNNAME (INSN_NAME)
TEST_VCREATE(int, s, 8, 8);
TEST_VCREATE(int, s, 16, 4);
TEST_VCREATE(int, s, 32, 2);
MFLOAT8_ONLY(TEST_VCREATE(mfloat, mf, 8, 8);)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VCREATE(float, f, 16, 4);
#endif
@ -108,6 +116,7 @@ FNNAME (INSN_NAME)
CHECK(TEST_MSG, uint, 64, 1, PRIx64, expected, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected, "");
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx16, expected, "");)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
CHECK_FP(TEST_MSG, float, 16, 4, PRIx16, expected, "");
#endif

@ -19,6 +19,10 @@ VECT_VAR_DECL(expected0,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected0,poly,8,8) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
VECT_VAR_DECL(expected0,poly,16,4) [] = { 0xfff0, 0xfff0, 0xfff0, 0xfff0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,8) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 4) [] = { 0xcc00, 0xcc00,
0xcc00, 0xcc00 };
@ -50,6 +54,12 @@ VECT_VAR_DECL(expected0,poly,8,16) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
VECT_VAR_DECL(expected0,poly,16,8) [] = { 0xfff0, 0xfff0, 0xfff0, 0xfff0,
0xfff0, 0xfff0, 0xfff0, 0xfff0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,16) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 8) [] = { 0xcc00, 0xcc00,
0xcc00, 0xcc00,
@ -73,6 +83,10 @@ VECT_VAR_DECL(expected1,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected1,poly,8,8) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
VECT_VAR_DECL(expected1,poly,16,4) [] = { 0xfff1, 0xfff1, 0xfff1, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,8) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 4) [] = { 0xcb80, 0xcb80,
0xcb80, 0xcb80 };
@ -104,6 +118,12 @@ VECT_VAR_DECL(expected1,poly,8,16) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
VECT_VAR_DECL(expected1,poly,16,8) [] = { 0xfff1, 0xfff1, 0xfff1, 0xfff1,
0xfff1, 0xfff1, 0xfff1, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,16) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 8) [] = { 0xcb80, 0xcb80,
0xcb80, 0xcb80,
@ -127,6 +147,10 @@ VECT_VAR_DECL(expected2,uint,64,1) [] = { 0xfffffffffffffff2 };
VECT_VAR_DECL(expected2,poly,8,8) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
VECT_VAR_DECL(expected2,poly,16,4) [] = { 0xfff2, 0xfff2, 0xfff2, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,8) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 4) [] = { 0xcb00, 0xcb00,
0xcb00, 0xcb00 };
@ -158,6 +182,12 @@ VECT_VAR_DECL(expected2,poly,8,16) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
VECT_VAR_DECL(expected2,poly,16,8) [] = { 0xfff2, 0xfff2, 0xfff2, 0xfff2,
0xfff2, 0xfff2, 0xfff2, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,16) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 8) [] = { 0xcb00, 0xcb00,
0xcb00, 0xcb00,
@ -201,6 +231,7 @@ void exec_vdup_vmov (void)
TEST_VDUP(, uint, u, 64, 1);
TEST_VDUP(, poly, p, 8, 8);
TEST_VDUP(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VDUP(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VDUP(, float, f, 16, 4);
#endif
@ -216,6 +247,7 @@ void exec_vdup_vmov (void)
TEST_VDUP(q, uint, u, 64, 2);
TEST_VDUP(q, poly, p, 8, 16);
TEST_VDUP(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VDUP(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VDUP(q, float, f, 16, 8);
#endif
@ -268,6 +300,7 @@ void exec_vdup_vmov (void)
TEST_VMOV(, uint, u, 64, 1);
TEST_VMOV(, poly, p, 8, 8);
TEST_VMOV(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VMOV(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VMOV(, float, f, 16, 4);
#endif
@ -283,6 +316,7 @@ void exec_vdup_vmov (void)
TEST_VMOV(q, uint, u, 64, 2);
TEST_VMOV(q, poly, p, 8, 16);
TEST_VMOV(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VMOV(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VMOV(q, float, f, 16, 8);
#endif

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf7, 0xf7, 0xf7, 0xf7,
0xf7, 0xf7, 0xf7, 0xf7 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff3, 0xfff3, 0xfff3, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf6, 0xf6, 0xf6, 0xf6,
0xf6, 0xf6, 0xf6, 0xf6 };
#endif
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1700000, 0xc1700000 };
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 4) [] = { 0xca80, 0xca80,
@ -47,6 +51,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf5, 0xf5, 0xf5, 0xf5,
0xf5, 0xf5, 0xf5, 0xf5 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff1, 0xfff1, 0xfff1, 0xfff1,
0xfff1, 0xfff1, 0xfff1, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf7, 0xf7, 0xf7, 0xf7,
0xf7, 0xf7, 0xf7, 0xf7,
0xf7, 0xf7, 0xf7, 0xf7,
0xf7, 0xf7, 0xf7, 0xf7 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 8) [] = { 0xca80, 0xca80,
0xca80, 0xca80,
@ -73,6 +83,7 @@ void exec_vdup_lane (void)
clean_results ();
TEST_MACRO_64BITS_VARIANTS_2_5(VLOAD, vector, buffer);
MFLOAT8_ONLY(VLOAD(vector, buffer, , mfloat, mf, 8, 8);)
#if defined (FP16_SUPPORTED)
VLOAD(vector, buffer, , float, f, 16, 4);
#endif
@ -89,6 +100,7 @@ void exec_vdup_lane (void)
TEST_VDUP_LANE(, uint, u, 64, 1, 1, 0);
TEST_VDUP_LANE(, poly, p, 8, 8, 8, 7);
TEST_VDUP_LANE(, poly, p, 16, 4, 4, 3);
MFLOAT8_ONLY(TEST_VDUP_LANE(, mfloat, mf, 8, 8, 8, 6);)
#if defined (FP16_SUPPORTED)
TEST_VDUP_LANE(, float, f, 16, 4, 4, 3);
#endif
@ -104,6 +116,7 @@ void exec_vdup_lane (void)
TEST_VDUP_LANE(q, uint, u, 64, 2, 1, 0);
TEST_VDUP_LANE(q, poly, p, 8, 16, 8, 5);
TEST_VDUP_LANE(q, poly, p, 16, 8, 4, 1);
MFLOAT8_ONLY(TEST_VDUP_LANE(q, mfloat, mf, 8, 16, 8, 7);)
#if defined (FP16_SUPPORTED)
TEST_VDUP_LANE(q, float, f, 16, 8, 4, 3);
#endif
@ -134,6 +147,10 @@ VECT_VAR_DECL(expected2,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected2,poly,8,8) [] = { 0xf7, 0xf7, 0xf7, 0xf7,
0xf7, 0xf7, 0xf7, 0xf7 };
VECT_VAR_DECL(expected2,poly,16,4) [] = { 0xfff3, 0xfff3, 0xfff3, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,8) [] = { 0xfb, 0xfb, 0xfb, 0xfb,
0xfb, 0xfb, 0xfb, 0xfb };
#endif
VECT_VAR_DECL(expected2,hfloat,32,2) [] = { 0xc1700000, 0xc1700000 };
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 4) [] = { 0xca80, 0xca80,
@ -165,6 +182,12 @@ VECT_VAR_DECL(expected2,poly,8,16) [] = { 0xf5, 0xf5, 0xf5, 0xf5,
0xf5, 0xf5, 0xf5, 0xf5 };
VECT_VAR_DECL(expected2,poly,16,8) [] = { 0xfff1, 0xfff1, 0xfff1, 0xfff1,
0xfff1, 0xfff1, 0xfff1, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,16) [] = { 0xfc, 0xfc, 0xfc, 0xfc,
0xfc, 0xfc, 0xfc, 0xfc,
0xfc, 0xfc, 0xfc, 0xfc,
0xfc, 0xfc, 0xfc, 0xfc };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 8) [] = { 0xc880, 0xc880,
0xc880, 0xc880,
@ -188,6 +211,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1700000, 0xc1700000,
clean_results ();
TEST_MACRO_128BITS_VARIANTS_2_5(VLOAD, vector, buffer);
MFLOAT8_ONLY(VLOAD(vector, buffer, q, mfloat, mf, 8, 16);)
#if defined (FP16_SUPPORTED)
VLOAD(vector, buffer, q, float, f, 16, 8);
#endif
@ -204,6 +228,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1700000, 0xc1700000,
TEST_VDUP_LANEQ(, uint, u, 64, 1, 2, 0);
TEST_VDUP_LANEQ(, poly, p, 8, 8, 16, 7);
TEST_VDUP_LANEQ(, poly, p, 16, 4, 8, 3);
MFLOAT8_ONLY(TEST_VDUP_LANEQ(, mfloat, mf, 8, 8, 16, 11);)
#if defined (FP16_SUPPORTED)
TEST_VDUP_LANEQ(, float, f, 16, 4, 8, 3);
#endif
@ -219,6 +244,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1700000, 0xc1700000,
TEST_VDUP_LANEQ(q, uint, u, 64, 2, 2, 0);
TEST_VDUP_LANEQ(q, poly, p, 8, 16, 16, 5);
TEST_VDUP_LANEQ(q, poly, p, 16, 8, 8, 1);
MFLOAT8_ONLY(TEST_VDUP_LANEQ(q, mfloat, mf, 8, 16, 16, 12);)
#if defined (FP16_SUPPORTED)
TEST_VDUP_LANEQ(q, float, f, 16, 8, 8, 7);
#endif

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf6, 0xf7, 0x55, 0x55,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff2, 0xfff3, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf5, 0xf6, 0xf7, 0x77,
0x77, 0x77, 0x77, 0x77 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 4) [] = { 0xcb00, 0xca80,
0x4b4d, 0x4b4d };
@ -43,6 +47,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xfc, 0xfd, 0xfe, 0xff,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff6, 0xfff7, 0x66, 0x66,
0x66, 0x66, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf9, 0xfa, 0xfb, 0xfc,
0xfd, 0xfe, 0xff, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 8) [] = { 0xc880, 0x4b4d,
0x4b4d, 0x4b4d,
@ -70,6 +80,10 @@ void exec_vext (void)
clean_results ();
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
#if MFLOAT8_SUPPORTED
VLOAD(vector1, buffer, , mfloat, mf, 8, 8);
VLOAD(vector1, buffer, q, mfloat, mf, 8, 16);
#endif
#ifdef FP16_SUPPORTED
VLOAD(vector1, buffer, , float, f, 16, 4);
VLOAD(vector1, buffer, q, float, f, 16, 8);
@ -88,6 +102,7 @@ void exec_vext (void)
VDUP(vector2, , uint, u, 64, 1, 0x88);
VDUP(vector2, , poly, p, 8, 8, 0x55);
VDUP(vector2, , poly, p, 16, 4, 0x66);
MFLOAT8_ONLY(VDUP(vector2, , mfloat, mf, 8, 8, MFLOAT8(0x77)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, , float, f, 16, 4, 14.6f); /* 14.6f is 0x4b4d. */
#endif
@ -103,6 +118,7 @@ void exec_vext (void)
VDUP(vector2, q, uint, u, 64, 2, 0x88);
VDUP(vector2, q, poly, p, 8, 16, 0x55);
VDUP(vector2, q, poly, p, 16, 8, 0x66);
MFLOAT8_ONLY(VDUP(vector2, q, mfloat, mf, 8, 16, MFLOAT8(0xaa)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, q, float, f, 16, 8, 14.6f);
#endif
@ -119,6 +135,7 @@ void exec_vext (void)
TEST_VEXT(, uint, u, 64, 1, 0);
TEST_VEXT(, poly, p, 8, 8, 6);
TEST_VEXT(, poly, p, 16, 4, 2);
MFLOAT8_ONLY(TEST_VEXT(, mfloat, mf, 8, 8, 5));
#if defined (FP16_SUPPORTED)
TEST_VEXT(, float, f, 16, 4, 2);
#endif
@ -134,6 +151,7 @@ void exec_vext (void)
TEST_VEXT(q, uint, u, 64, 2, 1);
TEST_VEXT(q, poly, p, 8, 16, 12);
TEST_VEXT(q, poly, p, 16, 8, 6);
MFLOAT8_ONLY(TEST_VEXT(q, mfloat, mf, 8, 16, 9));
#if defined (FP16_SUPPORTED)
TEST_VEXT(q, float, f, 16, 8, 7);
#endif

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected,hfloat,16,4) [] = { 0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
@ -32,6 +36,7 @@ void exec_vget_high (void)
DECL_VARIABLE_128BITS_VARIANTS(vector128);
TEST_MACRO_128BITS_VARIANTS_2_5(VLOAD, vector128, buffer);
MFLOAT8_ONLY(VLOAD(vector128, buffer, q, mfloat, mf, 8, 16);)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VLOAD(vector128, buffer, q, float, f, 16, 8);
#endif

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
#endif
VECT_VAR_DECL(expected,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected,int,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
@ -45,6 +49,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7,
0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected,hfloat,16,8) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80,
0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
@ -65,6 +75,10 @@ void exec_vld1 (void)
TEST_MACRO_ALL_VARIANTS_2_5(TEST_VLD1, vector, buffer);
#if MFLOAT8_SUPPORTED
TEST_VLD1(vector, buffer, , mfloat, mf, 8, 8);
TEST_VLD1(vector, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VLD1(vector, buffer, , float, f, 16, 4);
TEST_VLD1(vector, buffer, q, float, f, 16, 8);

@ -17,6 +17,10 @@ VECT_VAR_DECL(expected0,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected0,poly,8,8) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
VECT_VAR_DECL(expected0,poly,16,4) [] = { 0xfff0, 0xfff0, 0xfff0, 0xfff0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,8) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
#endif
VECT_VAR_DECL(expected0,hfloat,16,4) [] = { 0xcc00, 0xcc00, 0xcc00, 0xcc00 };
VECT_VAR_DECL(expected0,hfloat,32,2) [] = { 0xc1800000, 0xc1800000 };
VECT_VAR_DECL(expected0,int,8,16) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
@ -45,6 +49,12 @@ VECT_VAR_DECL(expected0,poly,8,16) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
VECT_VAR_DECL(expected0,poly,16,8) [] = { 0xfff0, 0xfff0, 0xfff0, 0xfff0,
0xfff0, 0xfff0, 0xfff0, 0xfff0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,16) [] = { 0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0,
0xf0, 0xf0, 0xf0, 0xf0 };
#endif
VECT_VAR_DECL(expected0,hfloat,16,8) [] = { 0xcc00, 0xcc00, 0xcc00, 0xcc00,
0xcc00, 0xcc00, 0xcc00, 0xcc00 };
VECT_VAR_DECL(expected0,hfloat,32,4) [] = { 0xc1800000, 0xc1800000,
@ -64,6 +74,10 @@ VECT_VAR_DECL(expected1,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected1,poly,8,8) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
VECT_VAR_DECL(expected1,poly,16,4) [] = { 0xfff1, 0xfff1, 0xfff1, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,8) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
#endif
VECT_VAR_DECL(expected1,hfloat,16,4) [] = { 0xcb80, 0xcb80, 0xcb80, 0xcb80 };
VECT_VAR_DECL(expected1,hfloat,32,2) [] = { 0xc1700000, 0xc1700000 };
VECT_VAR_DECL(expected1,int,8,16) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
@ -92,6 +106,12 @@ VECT_VAR_DECL(expected1,poly,8,16) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
VECT_VAR_DECL(expected1,poly,16,8) [] = { 0xfff1, 0xfff1, 0xfff1, 0xfff1,
0xfff1, 0xfff1, 0xfff1, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,16) [] = { 0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1,
0xf1, 0xf1, 0xf1, 0xf1 };
#endif
VECT_VAR_DECL(expected1,hfloat,16,8) [] = { 0xcb80, 0xcb80, 0xcb80, 0xcb80,
0xcb80, 0xcb80, 0xcb80, 0xcb80 };
VECT_VAR_DECL(expected1,hfloat,32,4) [] = { 0xc1700000, 0xc1700000,
@ -111,6 +131,10 @@ VECT_VAR_DECL(expected2,uint,64,1) [] = { 0xfffffffffffffff2 };
VECT_VAR_DECL(expected2,poly,8,8) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
VECT_VAR_DECL(expected2,poly,16,4) [] = { 0xfff2, 0xfff2, 0xfff2, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,8) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
#endif
VECT_VAR_DECL(expected2,hfloat,16,4) [] = { 0xcb00, 0xcb00, 0xcb00, 0xcb00 };
VECT_VAR_DECL(expected2,hfloat,32,2) [] = { 0xc1600000, 0xc1600000 };
VECT_VAR_DECL(expected2,int,8,16) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
@ -139,6 +163,12 @@ VECT_VAR_DECL(expected2,poly,8,16) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
VECT_VAR_DECL(expected2,poly,16,8) [] = { 0xfff2, 0xfff2, 0xfff2, 0xfff2,
0xfff2, 0xfff2, 0xfff2, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,16) [] = { 0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2,
0xf2, 0xf2, 0xf2, 0xf2 };
#endif
VECT_VAR_DECL(expected2,hfloat,16,8) [] = { 0xcb00, 0xcb00, 0xcb00, 0xcb00,
0xcb00, 0xcb00, 0xcb00, 0xcb00 };
VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1600000, 0xc1600000,
@ -163,6 +193,10 @@ void exec_vld1_dup (void)
TEST_MACRO_ALL_VARIANTS_2_5(TEST_VLD1_DUP, vector, buffer_dup);
#if MFLOAT8_SUPPORTED
TEST_VLD1_DUP(vector, buffer_dup, , mfloat, mf, 8, 8);
TEST_VLD1_DUP(vector, buffer_dup, q, mfloat, mf, 8, 16);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VLD1_DUP(vector, buffer_dup, , float, f, 16, 4);
TEST_VLD1_DUP(vector, buffer_dup, q, float, f, 16, 8);

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xf0 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xfff0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xf0, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected,hfloat,16,4) [] = { 0xaaaa, 0xaaaa, 0xcc00, 0xaaaa };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xaaaaaaaa, 0xc1800000 };
VECT_VAR_DECL(expected,int,8,16) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
@ -44,6 +48,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xf0, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
0xaaaa, 0xaaaa, 0xfff0, 0xaaaa };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xf0,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected,hfloat,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
0xaaaa, 0xcc00, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0xaaaaaaaa, 0xaaaaaaaa,
@ -75,6 +85,7 @@ void exec_vld1_lane (void)
ARRAY(buffer_src, uint, 64, 1);
ARRAY(buffer_src, poly, 8, 8);
ARRAY(buffer_src, poly, 16, 4);
MFLOAT8_ONLY(ARRAY(buffer_src, mfloat, 8, 8));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
ARRAY(buffer_src, float, 16, 4);
#endif
@ -90,6 +101,7 @@ void exec_vld1_lane (void)
ARRAY(buffer_src, uint, 64, 2);
ARRAY(buffer_src, poly, 8, 16);
ARRAY(buffer_src, poly, 16, 8);
MFLOAT8_ONLY(ARRAY(buffer_src, mfloat, 8, 16));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
ARRAY(buffer_src, float, 16, 8);
#endif
@ -108,6 +120,7 @@ void exec_vld1_lane (void)
TEST_VLD1_LANE(, uint, u, 64, 1, 0);
TEST_VLD1_LANE(, poly, p, 8, 8, 7);
TEST_VLD1_LANE(, poly, p, 16, 4, 3);
MFLOAT8_ONLY(TEST_VLD1_LANE(, mfloat, mf, 8, 8, 5));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VLD1_LANE(, float, f, 16, 4, 2);
#endif
@ -123,6 +136,7 @@ void exec_vld1_lane (void)
TEST_VLD1_LANE(q, uint, u, 64, 2, 0);
TEST_VLD1_LANE(q, poly, p, 8, 16, 12);
TEST_VLD1_LANE(q, poly, p, 16, 8, 6);
MFLOAT8_ONLY(TEST_VLD1_LANE(q, mfloat, mf, 8, 16, 11));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VLD1_LANE(q, float, f, 16, 8, 5);
#endif

@ -4,6 +4,7 @@
/* { dg-options "-O3" } */
#include <arm_neon.h>
#include "arm-neon-ref.h"
extern void abort (void);
@ -16,14 +17,14 @@ test_vld##SUFFIX##_x2 () \
BASE##x##ELTS##x##2##_t vectors; \
int i,j; \
for (i = 0; i < ELTS * 2; i++) \
data [i] = (BASE##_t) 2*i + 1; \
data [i] = CONVERT (BASE##_t, 2*i + 1); \
asm volatile ("" : : : "memory"); \
vectors = vld1##SUFFIX##_x2 (data); \
vst1##SUFFIX (temp, vectors.val[0]); \
vst1##SUFFIX (&temp[ELTS], vectors.val[1]); \
asm volatile ("" : : : "memory"); \
for (j = 0; j < ELTS * 2; j++) \
if (temp[j] != data[j]) \
if (!BITEQUAL (temp[j], data[j])) \
return 1; \
return 0; \
}
@ -56,6 +57,8 @@ VARIANT (float32, 4, q_f32)
#ifdef __aarch64__
#define VARIANTS(VARIANT) VARIANTS_1(VARIANT) \
VARIANT (mfloat8, 8, _mf8) \
VARIANT (mfloat8, 16, q_mf8) \
VARIANT (float64, 1, _f64) \
VARIANT (float64, 2, q_f64)
#else
@ -65,14 +68,14 @@ VARIANT (float64, 2, q_f64)
/* Tests of vld1_x2 and vld1q_x2. */
VARIANTS (TESTMETH)
#define CHECK(BASE, ELTS, SUFFIX) \
#define CHECKS(BASE, ELTS, SUFFIX) \
if (test_vld##SUFFIX##_x2 () != 0) \
abort ();
int
main (int argc, char **argv)
{
VARIANTS (CHECK)
VARIANTS (CHECKS)
return 0;
}

@ -17,7 +17,7 @@ test_vld##SUFFIX##_x3 () \
BASE##x##ELTS##x##3##_t vectors; \
int i,j; \
for (i = 0; i < ELTS * 3; i++) \
data [i] = (BASE##_t) 3*i; \
data [i] = CONVERT (BASE##_t, 3*i); \
asm volatile ("" : : : "memory"); \
vectors = vld1##SUFFIX##_x3 (data); \
vst1##SUFFIX (temp, vectors.val[0]); \
@ -25,7 +25,7 @@ test_vld##SUFFIX##_x3 () \
vst1##SUFFIX (&temp[ELTS * 2], vectors.val[2]); \
asm volatile ("" : : : "memory"); \
for (j = 0; j < ELTS * 3; j++) \
if (temp[j] != data[j]) \
if (!BITEQUAL (temp[j], data[j])) \
return 1; \
return 0; \
}
@ -58,6 +58,8 @@ VARIANT (float32, 4, q_f32)
#ifdef __aarch64__
#define VARIANTS(VARIANT) VARIANTS_1(VARIANT) \
VARIANT (mfloat8, 8, _mf8) \
VARIANT (mfloat8, 16, q_mf8) \
VARIANT (float64, 1, _f64) \
VARIANT (float64, 2, q_f64)
#else
@@ -70,7 +72,7 @@ VARIANTS (TESTMETH)
#define CHECKS(BASE, ELTS, SUFFIX) \
if (test_vld##SUFFIX##_x3 () != 0) \
fprintf (stderr, "test_vld1##SUFFIX##_x3");
fprintf (stderr, "test_vld1##SUFFIX##_x3"), abort ();
int
main (int argc, char **argv)


@@ -18,7 +18,7 @@ test_vld1##SUFFIX##_x4 () \
BASE##x##ELTS##x##4##_t vectors; \
int i,j; \
for (i = 0; i < ELTS * 4; i++) \
data [i] = (BASE##_t) 4*i; \
data [i] = CONVERT (BASE##_t, 4*i); \
asm volatile ("" : : : "memory"); \
vectors = vld1##SUFFIX##_x4 (data); \
vst1##SUFFIX (temp, vectors.val[0]); \
@@ -27,7 +27,7 @@ test_vld1##SUFFIX##_x4 () \
vst1##SUFFIX (&temp[ELTS * 3], vectors.val[3]); \
asm volatile ("" : : : "memory"); \
for (j = 0; j < ELTS * 4; j++) \
if (temp[j] != data[j]) \
if (!BITEQUAL (temp[j], data[j])) \
return 1; \
return 0; \
}
@@ -62,6 +62,8 @@ VARIANT (float32, 4, q_f32)
#ifdef __aarch64__
#define VARIANTS(VARIANT) VARIANTS_1(VARIANT) \
VARIANT (mfloat8, 8, _mf8) \
VARIANT (mfloat8, 16, q_mf8) \
VARIANT (float64, 1, _f64) \
VARIANT (float64, 2, q_f64)
#else


@@ -18,6 +18,10 @@ VECT_VAR_DECL(expected_vld2_0,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected_vld2_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
VECT_VAR_DECL(expected_vld2_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
#endif
VECT_VAR_DECL(expected_vld2_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld2_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_vld2_0,int,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
@@ -42,6 +46,12 @@ VECT_VAR_DECL(expected_vld2_0,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected_vld2_0,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_0,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7,
0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected_vld2_0,hfloat,16,8) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80,
0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected_vld2_0,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
@@ -61,6 +71,10 @@ VECT_VAR_DECL(expected_vld2_1,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected_vld2_1,poly,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected_vld2_1,poly,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_1,hmfloat,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected_vld2_1,hfloat,16,4) [] = { 0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected_vld2_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
VECT_VAR_DECL(expected_vld2_1,int,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
@@ -85,6 +99,12 @@ VECT_VAR_DECL(expected_vld2_1,poly,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
0xc, 0xd, 0xe, 0xf };
VECT_VAR_DECL(expected_vld2_1,poly,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
0xfffc, 0xfffd, 0xfffe, 0xffff };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_1,hmfloat,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7,
0x8, 0x9, 0xa, 0xb,
0xc, 0xd, 0xe, 0xf };
#endif
VECT_VAR_DECL(expected_vld2_1,hfloat,16,8) [] = { 0xc800, 0xc700, 0xc600, 0xc500,
0xc400, 0xc200, 0xc000, 0xbc00 };
VECT_VAR_DECL(expected_vld2_1,hfloat,32,4) [] = { 0xc1400000, 0xc1300000,
@@ -104,6 +124,10 @@ VECT_VAR_DECL(expected_vld3_0,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected_vld3_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
VECT_VAR_DECL(expected_vld3_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
#endif
VECT_VAR_DECL(expected_vld3_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld3_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_vld3_0,int,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
@@ -128,6 +152,12 @@ VECT_VAR_DECL(expected_vld3_0,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected_vld3_0,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_0,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7,
0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected_vld3_0,hfloat,16,8) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80,
0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected_vld3_0,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
@@ -147,6 +177,10 @@ VECT_VAR_DECL(expected_vld3_1,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected_vld3_1,poly,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected_vld3_1,poly,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_1,hmfloat,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected_vld3_1,hfloat,16,4) [] = { 0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected_vld3_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
VECT_VAR_DECL(expected_vld3_1,int,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
@@ -171,6 +205,12 @@ VECT_VAR_DECL(expected_vld3_1,poly,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
0xc, 0xd, 0xe, 0xf };
VECT_VAR_DECL(expected_vld3_1,poly,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
0xfffc, 0xfffd, 0xfffe, 0xffff };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_1,hmfloat,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7,
0x8, 0x9, 0xa, 0xb,
0xc, 0xd, 0xe, 0xf };
#endif
VECT_VAR_DECL(expected_vld3_1,hfloat,16,8) [] = { 0xc800, 0xc700, 0xc600, 0xc500,
0xc400, 0xc200, 0xc000, 0xbc00 };
VECT_VAR_DECL(expected_vld3_1,hfloat,32,4) [] = { 0xc1400000, 0xc1300000,
@@ -193,6 +233,10 @@ VECT_VAR_DECL(expected_vld3_2,poly,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7 };
VECT_VAR_DECL(expected_vld3_2,poly,16,4) [] = { 0xfff8, 0xfff9,
0xfffa, 0xfffb };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_2,hmfloat,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7 };
#endif
VECT_VAR_DECL(expected_vld3_2,hfloat,16,4) [] = { 0xc800, 0xc700, 0xc600, 0xc500 };
VECT_VAR_DECL(expected_vld3_2,hfloat,32,2) [] = { 0xc1400000, 0xc1300000 };
VECT_VAR_DECL(expected_vld3_2,int,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
@@ -217,6 +261,12 @@ VECT_VAR_DECL(expected_vld3_2,poly,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
0x1c, 0x1d, 0x1e, 0x1f };
VECT_VAR_DECL(expected_vld3_2,poly,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_2,hmfloat,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f };
#endif
VECT_VAR_DECL(expected_vld3_2,hfloat,16,8) [] = { 0x0000, 0x3c00, 0x4000, 0x4200,
0x4400, 0x4500, 0x4600, 0x4700 };
VECT_VAR_DECL(expected_vld3_2,hfloat,32,4) [] = { 0xc1000000, 0xc0e00000,
@@ -237,6 +287,10 @@ VECT_VAR_DECL(expected_vld4_0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
VECT_VAR_DECL(expected_vld4_0,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected_vld4_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
#endif
VECT_VAR_DECL(expected_vld4_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
VECT_VAR_DECL(expected_vld4_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld4_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
@@ -262,6 +316,12 @@ VECT_VAR_DECL(expected_vld4_0,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected_vld4_0,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_0,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7,
0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected_vld4_0,hfloat,16,8) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80,
0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected_vld4_0,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
@@ -281,6 +341,10 @@ VECT_VAR_DECL(expected_vld4_1,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected_vld4_1,poly,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
VECT_VAR_DECL(expected_vld4_1,poly,16,4) [] = { 0xfff4, 0xfff5, 0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_1,hmfloat,8,8) [] = { 0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected_vld4_1,hfloat,16,4) [] = { 0xca00, 0xc980, 0xc900, 0xc880 };
VECT_VAR_DECL(expected_vld4_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
VECT_VAR_DECL(expected_vld4_1,int,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
@@ -305,6 +369,12 @@ VECT_VAR_DECL(expected_vld4_1,poly,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
0xc, 0xd, 0xe, 0xf };
VECT_VAR_DECL(expected_vld4_1,poly,16,8) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb,
0xfffc, 0xfffd, 0xfffe, 0xffff };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_1,hmfloat,8,16) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7,
0x8, 0x9, 0xa, 0xb,
0xc, 0xd, 0xe, 0xf };
#endif
VECT_VAR_DECL(expected_vld4_1,hfloat,16,8) [] = { 0xc800, 0xc700, 0xc600, 0xc500,
0xc400, 0xc200, 0xc000, 0xbc00 };
VECT_VAR_DECL(expected_vld4_1,hfloat,32,4) [] = { 0xc1400000, 0xc1300000,
@@ -324,6 +394,10 @@ VECT_VAR_DECL(expected_vld4_2,uint,64,1) [] = { 0xfffffffffffffff2 };
VECT_VAR_DECL(expected_vld4_2,poly,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7 };
VECT_VAR_DECL(expected_vld4_2,poly,16,4) [] = { 0xfff8, 0xfff9, 0xfffa, 0xfffb };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_2,hmfloat,8,8) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7 };
#endif
VECT_VAR_DECL(expected_vld4_2,hfloat,16,4) [] = { 0xc800, 0xc700, 0xc600, 0xc500 };
VECT_VAR_DECL(expected_vld4_2,hfloat,32,2) [] = { 0xc1400000, 0xc1300000 };
VECT_VAR_DECL(expected_vld4_2,int,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
@@ -348,6 +422,12 @@ VECT_VAR_DECL(expected_vld4_2,poly,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
0x1c, 0x1d, 0x1e, 0x1f };
VECT_VAR_DECL(expected_vld4_2,poly,16,8) [] = { 0x0, 0x1, 0x2, 0x3,
0x4, 0x5, 0x6, 0x7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_2,hmfloat,8,16) [] = { 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f };
#endif
VECT_VAR_DECL(expected_vld4_2,hfloat,16,8) [] = { 0x0000, 0x3c00, 0x4000, 0x4200,
0x4400, 0x4500, 0x4600, 0x4700 };
VECT_VAR_DECL(expected_vld4_2,hfloat,32,4) [] = { 0xc1000000, 0xc0e00000,
@@ -367,6 +447,10 @@ VECT_VAR_DECL(expected_vld4_3,uint,64,1) [] = { 0xfffffffffffffff3 };
VECT_VAR_DECL(expected_vld4_3,poly,8,8) [] = { 0x8, 0x9, 0xa, 0xb,
0xc, 0xd, 0xe, 0xf };
VECT_VAR_DECL(expected_vld4_3,poly,16,4) [] = { 0xfffc, 0xfffd, 0xfffe, 0xffff };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_3,hmfloat,8,8) [] = { 0x8, 0x9, 0xa, 0xb,
0xc, 0xd, 0xe, 0xf };
#endif
VECT_VAR_DECL(expected_vld4_3,hfloat,16,4) [] = { 0xc400, 0xc200, 0xc000, 0xbc00 };
VECT_VAR_DECL(expected_vld4_3,hfloat,32,2) [] = { 0xc1200000, 0xc1100000 };
VECT_VAR_DECL(expected_vld4_3,int,8,16) [] = { 0x20, 0x21, 0x22, 0x23,
@@ -391,6 +475,12 @@ VECT_VAR_DECL(expected_vld4_3,poly,8,16) [] = { 0x20, 0x21, 0x22, 0x23,
0x2c, 0x2d, 0x2e, 0x2f };
VECT_VAR_DECL(expected_vld4_3,poly,16,8) [] = { 0x8, 0x9, 0xa, 0xb,
0xc, 0xd, 0xe, 0xf };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_3,hmfloat,8,16) [] = { 0x20, 0x21, 0x22, 0x23,
0x24, 0x25, 0x26, 0x27,
0x28, 0x29, 0x2a, 0x2b,
0x2c, 0x2d, 0x2e, 0x2f };
#endif
VECT_VAR_DECL(expected_vld4_3,hfloat,16,8) [] = { 0x4800, 0x4880, 0x4900, 0x4980,
0x4a00, 0x4a80, 0x4b00, 0x4b80 };
VECT_VAR_DECL(expected_vld4_3,hfloat,32,4) [] = { 0xc0800000, 0xc0400000,
@@ -436,6 +526,7 @@ void exec_vldX (void)
DECL_VLDX(uint, 64, 1, X); \
DECL_VLDX(poly, 8, 8, X); \
DECL_VLDX(poly, 16, 4, X); \
MFLOAT8_ONLY(DECL_VLDX(mfloat, 8, 8, X)); \
DECL_VLDX(float, 32, 2, X); \
DECL_VLDX(int, 8, 16, X); \
DECL_VLDX(int, 16, 8, X); \
@@ -445,6 +536,7 @@ void exec_vldX (void)
DECL_VLDX(uint, 32, 4, X); \
DECL_VLDX(poly, 8, 16, X); \
DECL_VLDX(poly, 16, 8, X); \
MFLOAT8_ONLY(DECL_VLDX(mfloat, 8, 16, X)); \
DECL_VLDX(float, 32, 4, X)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -467,6 +559,7 @@ void exec_vldX (void)
TEST_VLDX(, uint, u, 64, 1, X); \
TEST_VLDX(, poly, p, 8, 8, X); \
TEST_VLDX(, poly, p, 16, 4, X); \
MFLOAT8_ONLY(TEST_VLDX(, mfloat, mf, 8, 8, X)); \
TEST_VLDX(, float, f, 32, 2, X); \
TEST_VLDX(q, int, s, 8, 16, X); \
TEST_VLDX(q, int, s, 16, 8, X); \
@@ -476,6 +569,7 @@ void exec_vldX (void)
TEST_VLDX(q, uint, u, 32, 4, X); \
TEST_VLDX(q, poly, p, 8, 16, X); \
TEST_VLDX(q, poly, p, 16, 8, X); \
MFLOAT8_ONLY(TEST_VLDX(q, mfloat, mf, 8, 16, X)); \
TEST_VLDX(q, float, f, 32, 4, X)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -498,6 +592,7 @@ void exec_vldX (void)
TEST_EXTRA_CHUNK(uint, 64, 1, X, Y); \
TEST_EXTRA_CHUNK(poly, 8, 8, X, Y); \
TEST_EXTRA_CHUNK(poly, 16, 4, X, Y); \
MFLOAT8_ONLY(TEST_EXTRA_CHUNK(mfloat, 8, 8, X, Y)); \
TEST_EXTRA_CHUNK(float, 32, 2, X, Y); \
TEST_EXTRA_CHUNK(int, 8, 16, X, Y); \
TEST_EXTRA_CHUNK(int, 16, 8, X, Y); \
@@ -507,6 +602,7 @@ void exec_vldX (void)
TEST_EXTRA_CHUNK(uint, 32, 4, X, Y); \
TEST_EXTRA_CHUNK(poly, 8, 16, X, Y); \
TEST_EXTRA_CHUNK(poly, 16, 8, X, Y); \
MFLOAT8_ONLY(TEST_EXTRA_CHUNK(mfloat, 8, 16, X, Y)); \
TEST_EXTRA_CHUNK(float, 32, 4, X, Y)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -530,6 +626,7 @@ void exec_vldX (void)
CHECK(test_name, uint, 64, 1, PRIx64, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 8, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 4, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 8, PRIx8, EXPECTED, comment)); \
CHECK_FP(test_name, float, 32, 2, PRIx32, EXPECTED, comment); \
\
CHECK(test_name, int, 8, 16, PRIx8, EXPECTED, comment); \
@@ -540,6 +637,7 @@ void exec_vldX (void)
CHECK(test_name, uint, 32, 4, PRIx32, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 16, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 8, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 16, PRIx8, EXPECTED, comment)); \
CHECK_FP(test_name, float, 32, 4, PRIx32, EXPECTED, comment)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -580,6 +678,12 @@ void exec_vldX (void)
PAD(buffer_vld2_pad, poly, 8, 8);
VECT_ARRAY_INIT2(buffer_vld2, poly, 16, 4);
PAD(buffer_vld2_pad, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld2, mfloat, 8, 8, 2);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld2, mfloat, 8, 8, 2),
VECT_ARRAY_VAR(buffer_vld2, int, 8, 8, 2), 8 * 2);
PAD(buffer_vld2_pad, mfloat, 8, 8);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT2(buffer_vld2, float, 16, 4);
PAD(buffer_vld2_pad, float, 16, 4);
@@ -607,6 +711,12 @@ void exec_vldX (void)
PAD(buffer_vld2_pad, poly, 8, 16);
VECT_ARRAY_INIT2(buffer_vld2, poly, 16, 8);
PAD(buffer_vld2_pad, poly, 16, 8);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld2, mfloat, 8, 16, 2);
PAD(buffer_vld2_pad, mfloat, 8, 16);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld2, mfloat, 8, 16, 2),
VECT_ARRAY_VAR(buffer_vld2, int, 8, 16, 2), 16 * 2);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT2(buffer_vld2, float, 16, 8);
PAD(buffer_vld2_pad, float, 16, 8);
@@ -635,6 +745,12 @@ void exec_vldX (void)
PAD(buffer_vld3_pad, poly, 8, 8);
VECT_ARRAY_INIT3(buffer_vld3, poly, 16, 4);
PAD(buffer_vld3_pad, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld3, mfloat, 8, 8, 3);
PAD(buffer_vld3_pad, mfloat, 8, 8);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld3, mfloat, 8, 8, 3),
VECT_ARRAY_VAR(buffer_vld3, int, 8, 8, 3), 8 * 3);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT3(buffer_vld3, float, 16, 4);
PAD(buffer_vld3_pad, float, 16, 4);
@@ -662,6 +778,12 @@ void exec_vldX (void)
PAD(buffer_vld3_pad, poly, 8, 16);
VECT_ARRAY_INIT3(buffer_vld3, poly, 16, 8);
PAD(buffer_vld3_pad, poly, 16, 8);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld3, mfloat, 8, 16, 3);
PAD(buffer_vld3_pad, mfloat, 8, 16);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld3, mfloat, 8, 16, 3),
VECT_ARRAY_VAR(buffer_vld3, int, 8, 16, 3), 16 * 3);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT3(buffer_vld3, float, 16, 8);
PAD(buffer_vld3_pad, float, 16, 8);
@@ -690,6 +812,12 @@ void exec_vldX (void)
PAD(buffer_vld4_pad, poly, 8, 8);
VECT_ARRAY_INIT4(buffer_vld4, poly, 16, 4);
PAD(buffer_vld4_pad, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld4, mfloat, 8, 8, 4);
PAD(buffer_vld4_pad, mfloat, 8, 8);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld4, mfloat, 8, 8, 4),
VECT_ARRAY_VAR(buffer_vld4, int, 8, 8, 4), 8 * 4);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT4(buffer_vld4, float, 16, 4);
PAD(buffer_vld4_pad, float, 16, 4);
@@ -717,6 +845,12 @@ void exec_vldX (void)
PAD(buffer_vld4_pad, poly, 8, 16);
VECT_ARRAY_INIT4(buffer_vld4, poly, 16, 8);
PAD(buffer_vld4_pad, poly, 16, 8);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld4, mfloat, 8, 16, 4);
PAD(buffer_vld4_pad, mfloat, 8, 16);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld4, mfloat, 8, 16, 4),
VECT_ARRAY_VAR(buffer_vld4, int, 8, 16, 4), 16 * 4);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT4(buffer_vld4, float, 16, 8);
PAD(buffer_vld4_pad, float, 16, 8);


@@ -18,6 +18,10 @@ VECT_VAR_DECL(expected_vld2_0,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
VECT_VAR_DECL(expected_vld2_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf0, 0xf1,
0xf0, 0xf1, 0xf0, 0xf1 };
VECT_VAR_DECL(expected_vld2_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff0, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf0, 0xf1,
0xf0, 0xf1, 0xf0, 0xf1 };
#endif
VECT_VAR_DECL(expected_vld2_0,hfloat,16,4) [] = {0xcc00, 0xcb80, 0xcc00, 0xcb80 };
VECT_VAR_DECL(expected_vld2_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
@@ -36,6 +40,10 @@ VECT_VAR_DECL(expected_vld2_1,poly,8,8) [] = { 0xf0, 0xf1, 0xf0, 0xf1,
0xf0, 0xf1, 0xf0, 0xf1 };
VECT_VAR_DECL(expected_vld2_1,poly,16,4) [] = { 0xfff0, 0xfff1,
0xfff0, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_1,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf0, 0xf1,
0xf0, 0xf1, 0xf0, 0xf1 };
#endif
VECT_VAR_DECL(expected_vld2_1,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcc00, 0xcb80 };
VECT_VAR_DECL(expected_vld2_1,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
@@ -56,6 +64,10 @@ VECT_VAR_DECL(expected_vld3_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf0,
0xf1, 0xf2, 0xf0, 0xf1 };
VECT_VAR_DECL(expected_vld3_0,poly,16,4) [] = { 0xfff0, 0xfff1,
0xfff2, 0xfff0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf0,
0xf1, 0xf2, 0xf0, 0xf1 };
#endif
VECT_VAR_DECL(expected_vld3_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xcc00 };
VECT_VAR_DECL(expected_vld3_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
@@ -76,6 +88,10 @@ VECT_VAR_DECL(expected_vld3_1,poly,8,8) [] = { 0xf2, 0xf0, 0xf1, 0xf2,
0xf0, 0xf1, 0xf2, 0xf0 };
VECT_VAR_DECL(expected_vld3_1,poly,16,4) [] = { 0xfff1, 0xfff2,
0xfff0, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_1,hmfloat,8,8) [] = { 0xf2, 0xf0, 0xf1, 0xf2,
0xf0, 0xf1, 0xf2, 0xf0 };
#endif
VECT_VAR_DECL(expected_vld3_1,hfloat,16,4) [] = { 0xcb80, 0xcb00, 0xcc00, 0xcb80 };
VECT_VAR_DECL(expected_vld3_1,hfloat,32,2) [] = { 0xc1600000, 0xc1800000 };
@@ -96,6 +112,10 @@ VECT_VAR_DECL(expected_vld3_2,poly,8,8) [] = { 0xf1, 0xf2, 0xf0, 0xf1,
0xf2, 0xf0, 0xf1, 0xf2 };
VECT_VAR_DECL(expected_vld3_2,poly,16,4) [] = { 0xfff2, 0xfff0,
0xfff1, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_2,hmfloat,8,8) [] = { 0xf1, 0xf2, 0xf0, 0xf1,
0xf2, 0xf0, 0xf1, 0xf2 };
#endif
VECT_VAR_DECL(expected_vld3_2,hfloat,16,4) [] = { 0xcb00, 0xcc00, 0xcb80, 0xcb00 };
VECT_VAR_DECL(expected_vld3_2,hfloat,32,2) [] = { 0xc1700000, 0xc1600000 };
@@ -114,6 +134,10 @@ VECT_VAR_DECL(expected_vld4_0,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected_vld4_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
VECT_VAR_DECL(expected_vld4_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
#endif
VECT_VAR_DECL(expected_vld4_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld4_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
@@ -131,6 +155,10 @@ VECT_VAR_DECL(expected_vld4_1,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected_vld4_1,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
VECT_VAR_DECL(expected_vld4_1,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_1,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
#endif
VECT_VAR_DECL(expected_vld4_1,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld4_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
@@ -148,6 +176,10 @@ VECT_VAR_DECL(expected_vld4_2,uint,64,1) [] = { 0xfffffffffffffff2 };
VECT_VAR_DECL(expected_vld4_2,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
VECT_VAR_DECL(expected_vld4_2,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_2,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
#endif
VECT_VAR_DECL(expected_vld4_2,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld4_2,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
@@ -165,6 +197,10 @@ VECT_VAR_DECL(expected_vld4_3,uint,64,1) [] = { 0xfffffffffffffff3 };
VECT_VAR_DECL(expected_vld4_3,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
VECT_VAR_DECL(expected_vld4_3,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_3,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf0, 0xf1, 0xf2, 0xf3 };
#endif
VECT_VAR_DECL(expected_vld4_3,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld4_3,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
@@ -208,6 +244,7 @@ void exec_vldX_dup (void)
DECL_VLDX_DUP(uint, 64, 1, X); \
DECL_VLDX_DUP(poly, 8, 8, X); \
DECL_VLDX_DUP(poly, 16, 4, X); \
MFLOAT8_ONLY(DECL_VLDX_DUP(mfloat, 8, 8, X)); \
DECL_VLDX_DUP(float, 32, 2, X)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -229,6 +266,7 @@ void exec_vldX_dup (void)
TEST_VLDX_DUP(, uint, u, 64, 1, X); \
TEST_VLDX_DUP(, poly, p, 8, 8, X); \
TEST_VLDX_DUP(, poly, p, 16, 4, X); \
MFLOAT8_ONLY(TEST_VLDX_DUP(, mfloat, mf, 8, 8, X)); \
TEST_VLDX_DUP(, float, f, 32, 2, X)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -250,6 +288,7 @@ void exec_vldX_dup (void)
TEST_EXTRA_CHUNK(uint, 64, 1, X, Y); \
TEST_EXTRA_CHUNK(poly, 8, 8, X, Y); \
TEST_EXTRA_CHUNK(poly, 16, 4, X, Y); \
MFLOAT8_ONLY(TEST_EXTRA_CHUNK(mfloat, 8, 8, X, Y)); \
TEST_EXTRA_CHUNK(float, 32, 2, X, Y)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -272,6 +311,7 @@ void exec_vldX_dup (void)
CHECK(test_name, uint, 64, 1, PRIx64, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 8, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 4, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 8, PRIx8, EXPECTED, comment)); \
CHECK_FP(test_name, float, 32, 2, PRIx32, EXPECTED, comment)
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
@@ -313,6 +353,12 @@ void exec_vldX_dup (void)
PAD(buffer_vld2_pad, poly, 8, 8);
VECT_ARRAY_INIT2(buffer_vld2, poly, 16, 4);
PAD(buffer_vld2_pad, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld2, mfloat, 8, 8, 2);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld2, mfloat, 8, 8, 2),
VECT_ARRAY_VAR(buffer_vld2, int, 8, 8, 2), 8 * 2);
PAD(buffer_vld2_pad, mfloat, 8, 8);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT2(buffer_vld2, float, 16, 4);
PAD(buffer_vld2_pad, float, 16, 4);
@@ -340,6 +386,12 @@ void exec_vldX_dup (void)
PAD(buffer_vld2_pad, poly, 8, 16);
VECT_ARRAY_INIT2(buffer_vld2, poly, 16, 8);
PAD(buffer_vld2_pad, poly, 16, 8);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld2, mfloat, 8, 16, 2);
PAD(buffer_vld2_pad, mfloat, 8, 16);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld2, mfloat, 8, 16, 2),
VECT_ARRAY_VAR(buffer_vld2, int, 8, 16, 2), 16 * 2);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT2(buffer_vld2, float, 16, 8);
PAD(buffer_vld2_pad, float, 16, 8);
@@ -368,6 +420,12 @@ void exec_vldX_dup (void)
PAD(buffer_vld3_pad, poly, 8, 8);
VECT_ARRAY_INIT3(buffer_vld3, poly, 16, 4);
PAD(buffer_vld3_pad, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld3, mfloat, 8, 8, 3);
PAD(buffer_vld3_pad, mfloat, 8, 8);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld3, mfloat, 8, 8, 3),
VECT_ARRAY_VAR(buffer_vld3, int, 8, 8, 3), 8 * 3);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT3(buffer_vld3, float, 16, 4);
PAD(buffer_vld3_pad, float, 16, 4);
@@ -395,6 +453,12 @@ void exec_vldX_dup (void)
PAD(buffer_vld3_pad, poly, 8, 16);
VECT_ARRAY_INIT3(buffer_vld3, poly, 16, 8);
PAD(buffer_vld3_pad, poly, 16, 8);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld3, mfloat, 8, 16, 3);
PAD(buffer_vld3_pad, mfloat, 8, 16);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld3, mfloat, 8, 16, 3),
VECT_ARRAY_VAR(buffer_vld3, int, 8, 16, 3), 16 * 3);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT3(buffer_vld3, float, 16, 8);
PAD(buffer_vld3_pad, float, 16, 8);
@@ -423,6 +487,12 @@ void exec_vldX_dup (void)
PAD(buffer_vld4_pad, poly, 8, 8);
VECT_ARRAY_INIT4(buffer_vld4, poly, 16, 4);
PAD(buffer_vld4_pad, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld4, mfloat, 8, 8, 4);
PAD(buffer_vld4_pad, mfloat, 8, 8);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld4, mfloat, 8, 8, 4),
VECT_ARRAY_VAR(buffer_vld4, int, 8, 8, 4), 8 * 4);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT4(buffer_vld4, float, 16, 4);
PAD(buffer_vld4_pad, float, 16, 4);
@@ -450,6 +520,12 @@ void exec_vldX_dup (void)
PAD(buffer_vld4_pad, poly, 8, 16);
VECT_ARRAY_INIT4(buffer_vld4, poly, 16, 8);
PAD(buffer_vld4_pad, poly, 16, 8);
#if MFLOAT8_SUPPORTED
VECT_ARRAY(buffer_vld4, mfloat, 8, 16, 4);
PAD(buffer_vld4_pad, mfloat, 8, 16);
__builtin_memcpy (VECT_ARRAY_VAR(buffer_vld4, mfloat, 8, 16, 4),
VECT_ARRAY_VAR(buffer_vld4, int, 8, 16, 4), 16 * 4);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_ARRAY_INIT4(buffer_vld4, float, 16, 8);
PAD(buffer_vld4_pad, float, 16, 8);


@@ -18,6 +18,10 @@ VECT_VAR_DECL(expected_vld2_0,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld2_0,poly,16,4) [] = { 0xaaaa, 0xaaaa,
0xaaaa, 0xaaaa };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_0,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected_vld2_0,hfloat,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected_vld2_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_vld2_0,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
@@ -47,6 +51,10 @@ VECT_VAR_DECL(expected_vld2_1,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
VECT_VAR_DECL(expected_vld2_1,poly,8,8) [] = { 0xf0, 0xf1, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld2_1,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld2_1,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xf0, 0xf1 };
#endif
VECT_VAR_DECL(expected_vld2_1,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected_vld2_1,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa };
VECT_VAR_DECL(expected_vld2_1,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
@@ -76,6 +84,10 @@ VECT_VAR_DECL(expected_vld3_0,uint,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa };
VECT_VAR_DECL(expected_vld3_0,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld3_0,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_0,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected_vld3_0,hfloat,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected_vld3_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_vld3_0,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
@@ -105,6 +117,10 @@ VECT_VAR_DECL(expected_vld3_1,uint,32,2) [] = { 0xaaaaaaaa, 0xfffffff0 };
VECT_VAR_DECL(expected_vld3_1,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xf0, 0xf1, 0xf2, 0xaa };
VECT_VAR_DECL(expected_vld3_1,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_1,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected_vld3_1,hfloat,16,4) [] = { 0xaaaa, 0xaaaa, 0xcc00, 0xcb80 };
VECT_VAR_DECL(expected_vld3_1,hfloat,32,2) [] = { 0xc1600000, 0xaaaaaaaa };
VECT_VAR_DECL(expected_vld3_1,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
@@ -134,6 +150,10 @@ VECT_VAR_DECL(expected_vld3_2,uint,32,2) [] = { 0xfffffff1, 0xfffffff2 };
VECT_VAR_DECL(expected_vld3_2,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld3_2,poly,16,4) [] = { 0xaaaa, 0xfff0, 0xfff1, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld3_2,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xf0, 0xf1, 0xf2 };
#endif
VECT_VAR_DECL(expected_vld3_2,hfloat,16,4) [] = { 0xcb00, 0xaaaa, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected_vld3_2,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa };
VECT_VAR_DECL(expected_vld3_2,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xfff0, 0xfff1,
@@ -164,6 +184,10 @@ VECT_VAR_DECL(expected_vld4_0,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld4_0,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected_vld4_0,hfloat,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_0,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected_vld4_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_vld4_0,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
@@ -192,6 +216,10 @@ VECT_VAR_DECL(expected_vld4_1,uint,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa };
VECT_VAR_DECL(expected_vld4_1,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld4_1,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_1,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected_vld4_1,hfloat,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected_vld4_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
VECT_VAR_DECL(expected_vld4_1,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
@@ -221,6 +249,10 @@ VECT_VAR_DECL(expected_vld4_2,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
VECT_VAR_DECL(expected_vld4_2,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld4_2,poly,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_2,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
VECT_VAR_DECL(expected_vld4_2,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_vld4_2,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa };
VECT_VAR_DECL(expected_vld4_2,int,16,8) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa,
@ -250,6 +282,10 @@ VECT_VAR_DECL(expected_vld4_3,uint,32,2) [] = { 0xfffffff2, 0xfffffff3 };
VECT_VAR_DECL(expected_vld4_3,poly,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
VECT_VAR_DECL(expected_vld4_3,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vld4_3,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xf0, 0xf1, 0xf2, 0xf3 };
#endif
VECT_VAR_DECL(expected_vld4_3,hfloat,16,4) [] = { 0xaaaa, 0xaaaa, 0xaaaa, 0xaaaa };
VECT_VAR_DECL(expected_vld4_3,hfloat,32,2) [] = { 0xaaaaaaaa, 0xaaaaaaaa };
VECT_VAR_DECL(expected_vld4_3,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
@ -279,6 +315,9 @@ VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 32, 2);
VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 64, 2);
VECT_VAR_DECL_INIT(buffer_vld2_lane, poly, 8, 2);
VECT_VAR_DECL_INIT(buffer_vld2_lane, poly, 16, 2);
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_vld2_lane, mfloat, 8, 2)[2];
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer_vld2_lane, float, 16, 2);
#endif
@ -295,6 +334,9 @@ VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 32, 3);
VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 64, 3);
VECT_VAR_DECL_INIT(buffer_vld3_lane, poly, 8, 3);
VECT_VAR_DECL_INIT(buffer_vld3_lane, poly, 16, 3);
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_vld3_lane, mfloat, 8, 3)[3];
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer_vld3_lane, float, 16, 3);
#endif
@ -311,6 +353,9 @@ VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 32, 4);
VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 64, 4);
VECT_VAR_DECL_INIT(buffer_vld4_lane, poly, 8, 4);
VECT_VAR_DECL_INIT(buffer_vld4_lane, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_vld4_lane, mfloat, 8, 4)[4];
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer_vld4_lane, float, 16, 4);
#endif
@ -371,6 +416,7 @@ void exec_vldX_lane (void)
DECL_VLDX_LANE(uint, 16, 8, X); \
DECL_VLDX_LANE(uint, 32, 4, X); \
DECL_VLDX_LANE(poly, 16, 8, X); \
MFLOAT8_ONLY(DECL_VLDX_LANE(mfloat, 8, 8, X)); \
DECL_VLDX_LANE(float, 32, 2, X); \
DECL_VLDX_LANE(float, 32, 4, X)
@ -384,9 +430,9 @@ void exec_vldX_lane (void)
#endif
/* Add some padding to try to catch out of bound accesses. */
#define ARRAY1(V, T, W, N) VECT_VAR_DECL(V,T,W,N)[1]={42}
#define ARRAY1(V, T, W, N) VECT_VAR_DECL(V,T,W,N)[1]={CONVERT(T##W##_t,42)}
#define DUMMY_ARRAY(V, T, W, N, L) \
VECT_VAR_DECL(V,T,W,N)[N*L]={0}; \
VECT_VAR_DECL(V,T,W,N)[N*L]={}; \
ARRAY1(V##_pad,T,W,N)
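The padding scheme above can be modelled in plain C. This is only a sketch of the idea, not the testsuite's actual expansion: it uses a struct to guarantee that the canary byte sits directly after the buffer, whereas the ARRAY1/DUMMY_ARRAY macros rely on separate array objects being laid out adjacently.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model of the DUMMY_ARRAY/ARRAY1 padding scheme: a
   one-element canary (42) placed directly after the destination
   buffer.  If a store wrote past the buffer, the canary would
   change and the final check would fail.  */
struct padded_buffer {
  unsigned char buf[8];
  unsigned char pad;   /* the ARRAY1-style canary */
};

static int
store_and_check (struct padded_buffer *p, const unsigned char *src, int n)
{
  memcpy (p->buf, src, n);      /* stands in for the vstN call */
  return p->pad == 42;          /* out-of-bounds store detection */
}
```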
/* Use the same lanes regardless of the size of the array (X), for
@ -405,6 +451,7 @@ void exec_vldX_lane (void)
TEST_VLDX_LANE(q, uint, u, 16, 8, X, 5); \
TEST_VLDX_LANE(q, uint, u, 32, 4, X, 0); \
TEST_VLDX_LANE(q, poly, p, 16, 8, X, 5); \
MFLOAT8_ONLY(TEST_VLDX_LANE(, mfloat, mf, 8, 8, X, 7)); \
TEST_VLDX_LANE(, float, f, 32, 2, X, 0); \
TEST_VLDX_LANE(q, float, f, 32, 4, X, 2)
@ -431,6 +478,7 @@ void exec_vldX_lane (void)
TEST_EXTRA_CHUNK(uint, 16, 8, X, Y); \
TEST_EXTRA_CHUNK(uint, 32, 4, X, Y); \
TEST_EXTRA_CHUNK(poly, 16, 8, X, Y); \
MFLOAT8_ONLY(TEST_EXTRA_CHUNK(mfloat, 8, 8, X, Y)); \
TEST_EXTRA_CHUNK(float, 32, 2, X, Y); \
TEST_EXTRA_CHUNK(float, 32, 4, X, Y)
@ -453,6 +501,7 @@ void exec_vldX_lane (void)
CHECK(test_name, uint, 32, 2, PRIx32, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 8, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 4, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 8, PRIx8, EXPECTED, comment)); \
CHECK_FP(test_name, float, 32, 2, PRIx32, EXPECTED, comment); \
CHECK(test_name, int, 16, 8, PRIx16, EXPECTED, comment); \
CHECK(test_name, int, 32, 4, PRIx32, EXPECTED, comment); \
@ -475,6 +524,15 @@ void exec_vldX_lane (void)
}
#endif
#if MFLOAT8_SUPPORTED
__builtin_memcpy (VECT_VAR(buffer_vld2_lane, mfloat, 8, 2),
VECT_VAR(buffer_vld2_lane, int, 8, 2), 2);
__builtin_memcpy (VECT_VAR(buffer_vld3_lane, mfloat, 8, 3),
VECT_VAR(buffer_vld3_lane, int, 8, 3), 3);
__builtin_memcpy (VECT_VAR(buffer_vld4_lane, mfloat, 8, 4),
VECT_VAR(buffer_vld4_lane, int, 8, 4), 4);
#endif
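The memcpy-based seeding above is needed because mfloat8_t supports no conversions from integer types, so the mf8 lane buffers are filled by copying the raw bytes of the corresponding int8 buffers. A minimal scalar model of the idea (mf8_model is a hypothetical stand-in for mfloat8_t, so the sketch compiles without FP8 support):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for mfloat8_t: a one-byte type with no integer
   conversions of its own.  */
typedef struct { unsigned char v; } mf8_model;

/* Seed an mf8 buffer bitwise from an int8 buffer, as the test does
   with __builtin_memcpy: no value conversion, just a bit copy.  */
static void
seed_mf8 (mf8_model *dst, const signed char *src, int n)
{
  memcpy (dst, src, n);
}
```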
/* Declare the temporary buffers / variables. */
DECL_ALL_VLDX_LANE(2);
DECL_ALL_VLDX_LANE(3);
@ -494,6 +552,9 @@ void exec_vldX_lane (void)
DUMMY_ARRAY(buffer_src, uint, 16, 8, 4);
DUMMY_ARRAY(buffer_src, uint, 32, 4, 4);
DUMMY_ARRAY(buffer_src, poly, 16, 8, 4);
#if MFLOAT8_SUPPORTED
DUMMY_ARRAY(buffer_src, mfloat, 8, 8, 4);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
DUMMY_ARRAY(buffer_src, float, 16, 4, 4);
DUMMY_ARRAY(buffer_src, float, 16, 8, 4);

@ -21,6 +21,10 @@ VECT_VAR_DECL(expected_vrev16,poly,8,16) [] = { 0xf1, 0xf0, 0xf3, 0xf2,
0xf5, 0xf4, 0xf7, 0xf6,
0xf9, 0xf8, 0xfb, 0xfa,
0xfd, 0xfc, 0xff, 0xfe };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vrev16,hmfloat,8,8) [] = { 0xf1, 0xf0, 0xf3, 0xf2,
0xf5, 0xf4, 0xf7, 0xf6 };
#endif
/* Expected results for vrev32. */
VECT_VAR_DECL(expected_vrev32,int,8,8) [] = { 0xf3, 0xf2, 0xf1, 0xf0,
@ -32,6 +36,10 @@ VECT_VAR_DECL(expected_vrev32,uint,16,4) [] = { 0xfff1, 0xfff0, 0xfff3, 0xfff2 }
VECT_VAR_DECL(expected_vrev32,poly,8,8) [] = { 0xf3, 0xf2, 0xf1, 0xf0,
0xf7, 0xf6, 0xf5, 0xf4 };
VECT_VAR_DECL(expected_vrev32,poly,16,4) [] = { 0xfff1, 0xfff0, 0xfff3, 0xfff2 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vrev32,hmfloat,8,8) [] = { 0xf3, 0xf2, 0xf1, 0xf0,
0xf7, 0xf6, 0xf5, 0xf4 };
#endif
VECT_VAR_DECL(expected_vrev32,int,8,16) [] = { 0xf3, 0xf2, 0xf1, 0xf0,
0xf7, 0xf6, 0xf5, 0xf4,
0xfb, 0xfa, 0xf9, 0xf8,
@ -50,6 +58,12 @@ VECT_VAR_DECL(expected_vrev32,poly,8,16) [] = { 0xf3, 0xf2, 0xf1, 0xf0,
0xff, 0xfe, 0xfd, 0xfc };
VECT_VAR_DECL(expected_vrev32,poly,16,8) [] = { 0xfff1, 0xfff0, 0xfff3, 0xfff2,
0xfff5, 0xfff4, 0xfff7, 0xfff6 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vrev32,hmfloat,8,16) [] = { 0xf3, 0xf2, 0xf1, 0xf0,
0xf7, 0xf6, 0xf5, 0xf4,
0xfb, 0xfa, 0xf9, 0xf8,
0xff, 0xfe, 0xfd, 0xfc };
#endif
/* Expected results for vrev64. */
VECT_VAR_DECL(expected_vrev64,int,8,8) [] = { 0xf7, 0xf6, 0xf5, 0xf4,
@ -63,6 +77,10 @@ VECT_VAR_DECL(expected_vrev64,uint,32,2) [] = { 0xfffffff1, 0xfffffff0 };
VECT_VAR_DECL(expected_vrev64,poly,8,8) [] = { 0xf7, 0xf6, 0xf5, 0xf4,
0xf3, 0xf2, 0xf1, 0xf0 };
VECT_VAR_DECL(expected_vrev64,poly,16,4) [] = { 0xfff3, 0xfff2, 0xfff1, 0xfff0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vrev64,hmfloat,8,8) [] = { 0xf7, 0xf6, 0xf5, 0xf4,
0xf3, 0xf2, 0xf1, 0xf0 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected_vrev64, hfloat, 16, 4) [] = { 0xca80, 0xcb00,
0xcb80, 0xcc00 };
@ -90,6 +108,12 @@ VECT_VAR_DECL(expected_vrev64,poly,8,16) [] = { 0xf7, 0xf6, 0xf5, 0xf4,
0xfb, 0xfa, 0xf9, 0xf8 };
VECT_VAR_DECL(expected_vrev64,poly,16,8) [] = { 0xfff3, 0xfff2, 0xfff1, 0xfff0,
0xfff7, 0xfff6, 0xfff5, 0xfff4 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vrev64,hmfloat,8,16) [] = { 0xf7, 0xf6, 0xf5, 0xf4,
0xf3, 0xf2, 0xf1, 0xf0,
0xff, 0xfe, 0xfd, 0xfc,
0xfb, 0xfa, 0xf9, 0xf8 };
#endif
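The expected vrev arrays above all follow from one rule: vrevW reverses the element order within each W-bit chunk of the vector. A scalar model of that rule for byte elements:

```c
#include <assert.h>

/* Scalar model of vrevW on byte elements: reverse the bytes within
   each (w/8)-byte chunk.  */
static void
vrev_bytes (unsigned char *out, const unsigned char *in,
            int nelts, int w)
{
  int group = w / 8;
  for (int i = 0; i < nelts; i++)
    out[i] = in[(i / group) * group + (group - 1 - i % group)];
}
```

Feeding it the usual 0xf0, 0xf1, ... input pattern reproduces the vrev16/vrev32/vrev64 expected values listed above.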
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected_vrev64, hfloat, 16, 8) [] = { 0xca80, 0xcb00,
0xcb80, 0xcc00,
@ -114,6 +138,10 @@ void exec_vrev (void)
/* Initialize input "vector" from "buffer". */
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector, buffer);
#if MFLOAT8_SUPPORTED
VLOAD (vector, buffer, , mfloat, mf, 8, 8);
VLOAD (vector, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (FP16_SUPPORTED)
VLOAD (vector, buffer, , float, f, 16, 4);
VLOAD (vector, buffer, q, float, f, 16, 8);
@ -129,6 +157,7 @@ void exec_vrev (void)
TEST_VREV(q, int, s, 8, 16, 16);
TEST_VREV(q, uint, u, 8, 16, 16);
TEST_VREV(q, poly, p, 8, 16, 16);
MFLOAT8_ONLY(TEST_VREV(, mfloat, mf, 8, 8, 16));
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vrev16, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vrev16, "");
@ -136,6 +165,7 @@ void exec_vrev (void)
CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_vrev16, "");
CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_vrev16, "");
CHECK_POLY(TEST_MSG, poly, 8, 16, PRIx8, expected_vrev16, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vrev16, ""));
#undef TEST_MSG
#define TEST_MSG "VREV32"
@ -145,12 +175,14 @@ void exec_vrev (void)
TEST_VREV(, uint, u, 16, 4, 32);
TEST_VREV(, poly, p, 8, 8, 32);
TEST_VREV(, poly, p, 16, 4, 32);
MFLOAT8_ONLY(TEST_VREV(, mfloat, mf, 8, 8, 32));
TEST_VREV(q, int, s, 8, 16, 32);
TEST_VREV(q, int, s, 16, 8, 32);
TEST_VREV(q, uint, u, 8, 16, 32);
TEST_VREV(q, uint, u, 16, 8, 32);
TEST_VREV(q, poly, p, 8, 16, 32);
TEST_VREV(q, poly, p, 16, 8, 32);
MFLOAT8_ONLY(TEST_VREV(q, mfloat, mf, 8, 16, 32));
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vrev32, "");
CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_vrev32, "");
@ -158,12 +190,14 @@ void exec_vrev (void)
CHECK(TEST_MSG, uint, 16, 4, PRIx16, expected_vrev32, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vrev32, "");
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_vrev32, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vrev32, ""));
CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_vrev32, "");
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_vrev32, "");
CHECK(TEST_MSG, uint, 8, 16, PRIx8, expected_vrev32, "");
CHECK(TEST_MSG, uint, 16, 8, PRIx16, expected_vrev32, "");
CHECK_POLY(TEST_MSG, poly, 8, 16, PRIx8, expected_vrev32, "");
CHECK_POLY(TEST_MSG, poly, 16, 8, PRIx16, expected_vrev32, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 16, PRIx8, expected_vrev32, ""));
#undef TEST_MSG
#define TEST_MSG "VREV64"
@ -175,6 +209,7 @@ void exec_vrev (void)
TEST_VREV(, uint, u, 32, 2, 64);
TEST_VREV(, poly, p, 8, 8, 64);
TEST_VREV(, poly, p, 16, 4, 64);
MFLOAT8_ONLY(TEST_VREV(, mfloat, mf, 8, 8, 64));
TEST_VREV(q, int, s, 8, 16, 64);
TEST_VREV(q, int, s, 16, 8, 64);
TEST_VREV(q, int, s, 32, 4, 64);
@ -183,6 +218,7 @@ void exec_vrev (void)
TEST_VREV(q, uint, u, 32, 4, 64);
TEST_VREV(q, poly, p, 8, 16, 64);
TEST_VREV(q, poly, p, 16, 8, 64);
MFLOAT8_ONLY(TEST_VREV(q, mfloat, mf, 8, 16, 64));
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vrev64, "");
CHECK(TEST_MSG, int, 16, 4, PRIx16, expected_vrev64, "");
@ -192,6 +228,7 @@ void exec_vrev (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_vrev64, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vrev64, "");
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_vrev64, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vrev64, ""));
CHECK(TEST_MSG, int, 8, 16, PRIx8, expected_vrev64, "");
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_vrev64, "");
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_vrev64, "");
@ -200,6 +237,7 @@ void exec_vrev (void)
CHECK(TEST_MSG, uint, 32, 4, PRIx32, expected_vrev64, "");
CHECK_POLY(TEST_MSG, poly, 8, 16, PRIx8, expected_vrev64, "");
CHECK_POLY(TEST_MSG, poly, 16, 8, PRIx16, expected_vrev64, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 16, PRIx8, expected_vrev64, ""));
#if defined (FP16_SUPPORTED)
TEST_VREV (, float, f, 16, 4, 64);

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0x88 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0x55, 0xf7 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff0, 0xfff1, 0x66, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xbb, 0xf5, 0xf6, 0xf7 };
#endif
VECT_VAR_DECL(expected,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0x4840, 0xca80 };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1800000, 0x4204cccd };
VECT_VAR_DECL(expected,int,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
@ -42,6 +46,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xfc, 0xfd, 0xdd, 0xff };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
0xfff4, 0xfff5, 0xee, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7,
0xf8, 0xf9, 0xa0, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
VECT_VAR_DECL(expected,hfloat,16,8) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80,
0xca00, 0x4480, 0xc900, 0xc880 };
VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0xc1800000, 0xc1700000,
@ -64,6 +74,10 @@ void exec_vset_lane (void)
/* Initialize input "vector" from "buffer". */
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector, buffer);
#if MFLOAT8_SUPPORTED
VLOAD (vector, buffer, , mfloat, mf, 8, 8);
VLOAD (vector, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VLOAD(vector, buffer, , float, f, 16, 4);
VLOAD(vector, buffer, q, float, f, 16, 8);
@ -82,6 +96,7 @@ void exec_vset_lane (void)
TEST_VSET_LANE(, uint, u, 64, 1, 0x88, 0);
TEST_VSET_LANE(, poly, p, 8, 8, 0x55, 6);
TEST_VSET_LANE(, poly, p, 16, 4, 0x66, 2);
MFLOAT8_ONLY(TEST_VSET_LANE(, mfloat, mf, 8, 8, MFLOAT8(0xbb), 4));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VSET_LANE(, float, f, 16, 4, 8.5f, 2);
#endif
@ -97,6 +112,7 @@ void exec_vset_lane (void)
TEST_VSET_LANE(q, uint, u, 64, 2, 0x11, 1);
TEST_VSET_LANE(q, poly, p, 8, 16, 0xDD, 14);
TEST_VSET_LANE(q, poly, p, 16, 8, 0xEE, 6);
MFLOAT8_ONLY(TEST_VSET_LANE(q, mfloat, mf, 8, 16, MFLOAT8(0xa0), 10));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VSET_LANE(q, float, f, 16, 8, 4.5f, 5);
#endif
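The mf8 cases follow the same pattern as the other types: vset_lane copies the input vector and overwrites a single lane (lane 4 with 0xbb in the 64-bit test above, lane 10 with 0xa0 in the 128-bit one). A scalar sketch for byte elements:

```c
#include <assert.h>
#include <string.h>

/* Scalar model of vset_lane on byte elements: copy the vector and
   overwrite one lane with the new value.  */
static void
set_lane_bytes (unsigned char *out, const unsigned char *vec,
                int nelts, unsigned char val, int lane)
{
  memcpy (out, vec, nelts);
  out[lane] = val;
}
```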

@ -41,6 +41,7 @@ void FNNAME (INSN_NAME) (void)
DECL_VSHUFFLE(uint, 32, 2); \
DECL_VSHUFFLE(poly, 8, 8); \
DECL_VSHUFFLE(poly, 16, 4); \
MFLOAT8_ONLY(DECL_VSHUFFLE(mfloat, 8, 8)); \
DECL_VSHUFFLE(float, 32, 2); \
DECL_VSHUFFLE(int, 8, 16); \
DECL_VSHUFFLE(int, 16, 8); \
@ -50,6 +51,7 @@ void FNNAME (INSN_NAME) (void)
DECL_VSHUFFLE(uint, 32, 4); \
DECL_VSHUFFLE(poly, 8, 16); \
DECL_VSHUFFLE(poly, 16, 8); \
MFLOAT8_ONLY(DECL_VSHUFFLE(mfloat, 8, 16)); \
DECL_VSHUFFLE(float, 32, 4)
DECL_ALL_VSHUFFLE();
@ -60,6 +62,10 @@ void FNNAME (INSN_NAME) (void)
/* Initialize input "vector" from "buffer". */
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector1, buffer);
#if MFLOAT8_SUPPORTED
VLOAD (vector1, buffer, , mfloat, mf, 8, 8);
VLOAD (vector1, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (FP16_SUPPORTED)
VLOAD (vector1, buffer, , float, f, 16, 4);
VLOAD (vector1, buffer, q, float, f, 16, 8);
@ -76,6 +82,7 @@ void FNNAME (INSN_NAME) (void)
VDUP(vector2, , uint, u, 32, 2, 0x77);
VDUP(vector2, , poly, p, 8, 8, 0x55);
VDUP(vector2, , poly, p, 16, 4, 0x66);
MFLOAT8_ONLY(VDUP(vector2, , mfloat, mf, 8, 8, MFLOAT8(0xaa)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, , float, f, 16, 4, 14.6f); /* 14.6f is 0x4b4d. */
#endif
@ -89,6 +96,7 @@ void FNNAME (INSN_NAME) (void)
VDUP(vector2, q, uint, u, 32, 4, 0x77);
VDUP(vector2, q, poly, p, 8, 16, 0x55);
VDUP(vector2, q, poly, p, 16, 8, 0x66);
MFLOAT8_ONLY(VDUP(vector2, q, mfloat, mf, 8, 16, MFLOAT8(0xbc)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, q, float, f, 16, 8, 14.6f);
#endif
@ -103,6 +111,7 @@ void FNNAME (INSN_NAME) (void)
TEST_VSHUFFLE(INSN, , uint, u, 32, 2); \
TEST_VSHUFFLE(INSN, , poly, p, 8, 8); \
TEST_VSHUFFLE(INSN, , poly, p, 16, 4); \
MFLOAT8_ONLY(TEST_VSHUFFLE(INSN, , mfloat, mf, 8, 8)); \
TEST_VSHUFFLE(INSN, , float, f, 32, 2); \
TEST_VSHUFFLE(INSN, q, int, s, 8, 16); \
TEST_VSHUFFLE(INSN, q, int, s, 16, 8); \
@ -112,6 +121,7 @@ void FNNAME (INSN_NAME) (void)
TEST_VSHUFFLE(INSN, q, uint, u, 32, 4); \
TEST_VSHUFFLE(INSN, q, poly, p, 8, 16); \
TEST_VSHUFFLE(INSN, q, poly, p, 16, 8); \
MFLOAT8_ONLY(TEST_VSHUFFLE(INSN, q, mfloat, mf, 8, 16)); \
TEST_VSHUFFLE(INSN, q, float, f, 32, 4)
#define TEST_VSHUFFLE_FP16(INSN) \
@ -127,6 +137,7 @@ void FNNAME (INSN_NAME) (void)
TEST_EXTRA_CHUNK(uint, 32, 2, 1); \
TEST_EXTRA_CHUNK(poly, 8, 8, 1); \
TEST_EXTRA_CHUNK(poly, 16, 4, 1); \
MFLOAT8_ONLY(TEST_EXTRA_CHUNK(mfloat, 8, 8, 1)); \
TEST_EXTRA_CHUNK(float, 32, 2, 1); \
TEST_EXTRA_CHUNK(int, 8, 16, 1); \
TEST_EXTRA_CHUNK(int, 16, 8, 1); \
@ -136,6 +147,7 @@ void FNNAME (INSN_NAME) (void)
TEST_EXTRA_CHUNK(uint, 32, 4, 1); \
TEST_EXTRA_CHUNK(poly, 8, 16, 1); \
TEST_EXTRA_CHUNK(poly, 16, 8, 1); \
MFLOAT8_ONLY(TEST_EXTRA_CHUNK(mfloat, 8, 16, 1)); \
TEST_EXTRA_CHUNK(float, 32, 4, 1)
/* vshuffle support all vector types except [u]int64x1 and
@ -150,6 +162,7 @@ void FNNAME (INSN_NAME) (void)
CHECK(test_name, uint, 32, 2, PRIx32, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 8, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 4, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 8, PRIx8, EXPECTED, comment)); \
CHECK_FP(test_name, float, 32, 2, PRIx32, EXPECTED, comment); \
\
CHECK(test_name, int, 8, 16, PRIx8, EXPECTED, comment); \
@ -160,6 +173,7 @@ void FNNAME (INSN_NAME) (void)
CHECK(test_name, uint, 32, 4, PRIx32, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 8, 16, PRIx8, EXPECTED, comment); \
CHECK_POLY(test_name, poly, 16, 8, PRIx16, EXPECTED, comment); \
MFLOAT8_ONLY(CHECK_FP(test_name, mfloat, 8, 16, PRIx8, EXPECTED, comment)); \
CHECK_FP(test_name, float, 32, 4, PRIx32, EXPECTED, comment); \
}
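This file is shared by the permute tests via INSN_NAME. As an illustration of what the checked values encode, here is a scalar model of one such permute, vtrn, for byte elements; the other shuffles (vzip, vuzp) differ only in the index pattern. This is a sketch of the architectural semantics, not code from the testsuite:

```c
#include <assert.h>

/* Scalar model of vtrn on byte elements: even-indexed lanes of a
   and b interleave into out0, odd-indexed lanes into out1.  */
static void
vtrn_bytes (unsigned char *out0, unsigned char *out1,
            const unsigned char *a, const unsigned char *b, int nelts)
{
  for (int i = 0; i < nelts / 2; i++)
    {
      out0[2 * i]     = a[2 * i];
      out0[2 * i + 1] = b[2 * i];
      out1[2 * i]     = a[2 * i + 1];
      out1[2 * i + 1] = b[2 * i + 1];
    }
}
```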

@ -16,6 +16,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf6, 0x33, 0x33, 0x33,
0x33, 0x33, 0x33, 0x33 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff2, 0x3333, 0x3333, 0x3333 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf2, 0x33, 0x33, 0x33,
0x33, 0x33, 0x33, 0x33 };
#endif
VECT_VAR_DECL(expected,hfloat,16,4) [] = { 0xcb80, 0x3333, 0x3333, 0x3333 };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1700000, 0x33333333 };
VECT_VAR_DECL(expected,int,8,16) [] = { 0xff, 0x33, 0x33, 0x33,
@ -43,6 +47,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xfa, 0x33, 0x33, 0x33,
0x33, 0x33, 0x33, 0x33 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff4, 0x3333, 0x3333, 0x3333,
0x3333, 0x3333, 0x3333, 0x3333 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xfe, 0x33, 0x33, 0x33,
0x33, 0x33, 0x33, 0x33,
0x33, 0x33, 0x33, 0x33,
0x33, 0x33, 0x33, 0x33 };
#endif
VECT_VAR_DECL(expected,hfloat,16,8) [] = { 0xc900, 0x3333, 0x3333, 0x3333,
0x3333, 0x3333, 0x3333, 0x3333 };
VECT_VAR_DECL(expected,hfloat,32,4) [] = { 0xc1700000, 0x33333333,
@ -72,6 +82,7 @@ void exec_vst1_lane (void)
TEST_VST1_LANE(, uint, u, 64, 1, 0);
TEST_VST1_LANE(, poly, p, 8, 8, 6);
TEST_VST1_LANE(, poly, p, 16, 4, 2);
MFLOAT8_ONLY(TEST_VST1_LANE(, mfloat, mf, 8, 8, 2));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VST1_LANE(, float, f, 16, 4, 1);
#endif
@ -87,6 +98,7 @@ void exec_vst1_lane (void)
TEST_VST1_LANE(q, uint, u, 64, 2, 0);
TEST_VST1_LANE(q, poly, p, 8, 16, 10);
TEST_VST1_LANE(q, poly, p, 16, 8, 4);
MFLOAT8_ONLY(TEST_VST1_LANE(q, mfloat, mf, 8, 16, 14));
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
TEST_VST1_LANE(q, float, f, 16, 8, 6);
#endif

@ -17,14 +17,14 @@ test_vst1##SUFFIX##_x2 () \
BASE##x##ELTS##x##2##_t vectors; \
int i,j; \
for (i = 0; i < ELTS * 2; i++) \
data [i] = (BASE##_t) 2*i; \
data [i] = CONVERT (BASE##_t, 2*i); \
asm volatile ("" : : : "memory"); \
vectors.val[0] = vld1##SUFFIX (data); \
vectors.val[1] = vld1##SUFFIX (&data[ELTS]); \
vst1##SUFFIX##_x2 (temp, vectors); \
asm volatile ("" : : : "memory"); \
for (j = 0; j < ELTS * 2; j++) \
if (temp[j] != data[j]) \
if (!BITEQUAL (temp[j], data[j])) \
return 1; \
return 0; \
}
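The switch from != to BITEQUAL above is needed because mfloat8_t provides no equality operator, so elements have to be compared on their object representations instead. One way such a helper can be written (BIT_EQUAL here is an illustrative macro, not necessarily the harness's actual definition):

```c
#include <assert.h>
#include <string.h>

/* Compare two lvalues by their object representation rather than
   with ==, which does not exist for mfloat8_t.  */
#define BIT_EQUAL(a, b) \
  (sizeof (a) == sizeof (b) && memcmp (&(a), &(b), sizeof (a)) == 0)
```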
@ -57,6 +57,8 @@ VARIANT (float32, 4, q_f32)
#ifdef __aarch64__
#define VARIANTS(VARIANT) VARIANTS_1(VARIANT) \
VARIANT (mfloat8, 8, _mf8) \
VARIANT (mfloat8, 16, q_mf8) \
VARIANT (float64, 1, _f64) \
VARIANT (float64, 2, q_f64)
#else
@ -68,7 +70,7 @@ VARIANTS (TESTMETH)
#define CHECKS(BASE, ELTS, SUFFIX) \
if (test_vst1##SUFFIX##_x2 () != 0) \
fprintf (stderr, "test_vst1##SUFFIX##_x2");
fprintf (stderr, "test_vst1##SUFFIX##_x2"), __builtin_abort ();
int
main (int argc, char **argv)

@ -17,7 +17,7 @@ test_vst1##SUFFIX##_x3 () \
BASE##x##ELTS##x##3##_t vectors; \
int i,j; \
for (i = 0; i < ELTS * 3; i++) \
data [i] = (BASE##_t) 3*i; \
data [i] = CONVERT (BASE##_t, 3*i); \
asm volatile ("" : : : "memory"); \
vectors.val[0] = vld1##SUFFIX (data); \
vectors.val[1] = vld1##SUFFIX (&data[ELTS]); \
@ -25,7 +25,7 @@ test_vst1##SUFFIX##_x3 () \
vst1##SUFFIX##_x3 (temp, vectors); \
asm volatile ("" : : : "memory"); \
for (j = 0; j < ELTS * 3; j++) \
if (temp[j] != data[j]) \
if (!BITEQUAL (temp[j], data[j])) \
return 1; \
return 0; \
}
@ -58,6 +58,8 @@ VARIANT (float32, 4, q_f32)
#ifdef __aarch64__
#define VARIANTS(VARIANT) VARIANTS_1(VARIANT) \
VARIANT (mfloat8, 8, _mf8) \
VARIANT (mfloat8, 16, q_mf8) \
VARIANT (float64, 1, _f64) \
VARIANT (float64, 2, q_f64)
#else
@ -69,7 +71,7 @@ VARIANTS (TESTMETH)
#define CHECKS(BASE, ELTS, SUFFIX) \
if (test_vst1##SUFFIX##_x3 () != 0) \
fprintf (stderr, "test_vst1##SUFFIX##_x3");
fprintf (stderr, "test_vst1##SUFFIX##_x3"), __builtin_abort ();
int
main (int argc, char **argv)

@ -17,7 +17,7 @@ test_vst1##SUFFIX##_x4 () \
BASE##x##ELTS##x##4##_t vectors; \
int i,j; \
for (i = 0; i < ELTS * 4; i++) \
data [i] = (BASE##_t) 4*i; \
data [i] = CONVERT (BASE##_t, 4*i); \
asm volatile ("" : : : "memory"); \
vectors.val[0] = vld1##SUFFIX (data); \
vectors.val[1] = vld1##SUFFIX (&data[ELTS]); \
@ -26,7 +26,7 @@ test_vst1##SUFFIX##_x4 () \
vst1##SUFFIX##_x4 (temp, vectors); \
asm volatile ("" : : : "memory"); \
for (j = 0; j < ELTS * 4; j++) \
if (temp[j] != data[j]) \
if (!BITEQUAL (temp[j], data[j])) \
return 1; \
return 0; \
}
@ -61,6 +61,8 @@ VARIANT (float32, 4, q_f32)
#ifdef __aarch64__
#define VARIANTS(VARIANT) VARIANTS_1(VARIANT) \
VARIANT (mfloat8, 8, _mf8) \
VARIANT (mfloat8, 16, q_mf8) \
VARIANT (float64, 1, _f64) \
VARIANT (float64, 2, q_f64)
#else
@ -72,7 +74,7 @@ VARIANTS (TESTMETH)
#define CHECKS(BASE, ELTS, SUFFIX) \
if (test_vst1##SUFFIX##_x4 () != 0) \
fprintf (stderr, "test_vst1##SUFFIX##_x4");
fprintf (stderr, "test_vst1##SUFFIX##_x4"), __builtin_abort ();
int
main (int argc, char **argv)

@ -14,6 +14,10 @@ VECT_VAR_DECL(expected_st2_0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
VECT_VAR_DECL(expected_st2_0,poly,8,8) [] = { 0xf0, 0xf1, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st2_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0x0, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st2_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st2_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0x0, 0x0 };
VECT_VAR_DECL(expected_st2_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_st2_0,int,16,8) [] = { 0xfff0, 0xfff1, 0x0, 0x0,
@ -42,6 +46,10 @@ VECT_VAR_DECL(expected_st2_1,uint,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st2_1,poly,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st2_1,poly,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st2_1,hmfloat,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st2_1,hfloat,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st2_1,hfloat,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st2_1,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
@ -68,6 +76,10 @@ VECT_VAR_DECL(expected_st3_0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
VECT_VAR_DECL(expected_st3_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st3_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st3_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st3_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0x0 };
VECT_VAR_DECL(expected_st3_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_st3_0,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0x0,
@ -97,6 +109,10 @@ VECT_VAR_DECL(expected_st3_1,uint,32,2) [] = { 0xfffffff2, 0x0 };
VECT_VAR_DECL(expected_st3_1,poly,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st3_1,poly,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st3_1,hmfloat,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st3_1,hfloat,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st3_1,hfloat,32,2) [] = { 0xc1600000, 0x0 };
VECT_VAR_DECL(expected_st3_1,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
@ -123,6 +139,10 @@ VECT_VAR_DECL(expected_st3_2,uint,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st3_2,poly,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st3_2,poly,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st3_2,hmfloat,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st3_2,hfloat,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st3_2,hfloat,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st3_2,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
@ -149,6 +169,10 @@ VECT_VAR_DECL(expected_st4_0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
VECT_VAR_DECL(expected_st4_0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_0,poly,16,4) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st4_0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st4_0,hfloat,16,4) [] = { 0xcc00, 0xcb80, 0xcb00, 0xca80 };
VECT_VAR_DECL(expected_st4_0,hfloat,32,2) [] = { 0xc1800000, 0xc1700000 };
VECT_VAR_DECL(expected_st4_0,int,16,8) [] = { 0xfff0, 0xfff1, 0xfff2, 0xfff3,
@ -178,6 +202,10 @@ VECT_VAR_DECL(expected_st4_1,uint,32,2) [] = { 0xfffffff2, 0xfffffff3 };
VECT_VAR_DECL(expected_st4_1,poly,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_1,poly,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st4_1,hmfloat,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st4_1,hfloat,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_1,hfloat,32,2) [] = { 0xc1600000, 0xc1500000 };
VECT_VAR_DECL(expected_st4_1,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
@ -204,6 +232,10 @@ VECT_VAR_DECL(expected_st4_2,uint,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_2,poly,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_2,poly,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st4_2,hmfloat,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st4_2,hfloat,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_2,hfloat,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_2,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
@ -230,6 +262,10 @@ VECT_VAR_DECL(expected_st4_3,uint,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_3,poly,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_3,poly,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_st4_3,hmfloat,8,8) [] = { 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0 };
#endif
VECT_VAR_DECL(expected_st4_3,hfloat,16,4) [] = { 0x0, 0x0, 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_3,hfloat,32,2) [] = { 0x0, 0x0 };
VECT_VAR_DECL(expected_st4_3,int,16,8) [] = { 0x0, 0x0, 0x0, 0x0,
@ -256,6 +292,9 @@ VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 32, 2);
VECT_VAR_DECL_INIT(buffer_vld2_lane, uint, 64, 2);
VECT_VAR_DECL_INIT(buffer_vld2_lane, poly, 8, 2);
VECT_VAR_DECL_INIT(buffer_vld2_lane, poly, 16, 2);
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_vld2_lane, mfloat, 8, 2)[2];
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer_vld2_lane, float, 16, 2);
#endif
@ -272,6 +311,9 @@ VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 32, 3);
VECT_VAR_DECL_INIT(buffer_vld3_lane, uint, 64, 3);
VECT_VAR_DECL_INIT(buffer_vld3_lane, poly, 8, 3);
VECT_VAR_DECL_INIT(buffer_vld3_lane, poly, 16, 3);
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_vld3_lane, mfloat, 8, 3)[3];
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer_vld3_lane, float, 16, 3);
#endif
@ -288,6 +330,9 @@ VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 32, 4);
VECT_VAR_DECL_INIT(buffer_vld4_lane, uint, 64, 4);
VECT_VAR_DECL_INIT(buffer_vld4_lane, poly, 8, 4);
VECT_VAR_DECL_INIT(buffer_vld4_lane, poly, 16, 4);
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(buffer_vld4_lane, mfloat, 8, 4)[4];
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
VECT_VAR_DECL_INIT(buffer_vld4_lane, float, 16, 4);
#endif
@ -347,6 +392,7 @@ void exec_vstX_lane (void)
DECL_VSTX_LANE(uint, 32, 2, X); \
DECL_VSTX_LANE(poly, 8, 8, X); \
DECL_VSTX_LANE(poly, 16, 4, X); \
MFLOAT8_ONLY(DECL_VSTX_LANE(mfloat, 8, 8, X);) \
DECL_VSTX_LANE(float, 32, 2, X); \
DECL_VSTX_LANE(int, 16, 8, X); \
DECL_VSTX_LANE(int, 32, 4, X); \
@ -378,6 +424,7 @@ void exec_vstX_lane (void)
TEST_VSTX_LANE(, uint, u, 32, 2, X, 1); \
TEST_VSTX_LANE(, poly, p, 8, 8, X, 4); \
TEST_VSTX_LANE(, poly, p, 16, 4, X, 3); \
MFLOAT8_ONLY(TEST_VSTX_LANE(, mfloat, mf, 8, 8, X, 5)); \
TEST_VSTX_LANE(q, int, s, 16, 8, X, 6); \
TEST_VSTX_LANE(q, int, s, 32, 4, X, 2); \
TEST_VSTX_LANE(q, uint, u, 16, 8, X, 5); \
@ -403,6 +450,7 @@ void exec_vstX_lane (void)
TEST_EXTRA_CHUNK(uint, 32, 2, X, Y); \
TEST_EXTRA_CHUNK(poly, 8, 8, X, Y); \
TEST_EXTRA_CHUNK(poly, 16, 4, X, Y); \
MFLOAT8_ONLY(TEST_EXTRA_CHUNK(mfloat, 8, 8, X, Y)); \
TEST_EXTRA_CHUNK(float, 32, 2, X, Y); \
TEST_EXTRA_CHUNK(int, 16, 8, X, Y); \
TEST_EXTRA_CHUNK(int, 32, 4, X, Y); \
@ -420,6 +468,15 @@ void exec_vstX_lane (void)
#define TEST_ALL_EXTRA_CHUNKS(X,Y) TEST_ALL_EXTRA_CHUNKS_NO_FP16(X, Y)
#endif
#if MFLOAT8_SUPPORTED
__builtin_memcpy (VECT_VAR(buffer_vld2_lane, mfloat, 8, 2),
VECT_VAR(buffer_vld2_lane, int, 8, 2), 2);
__builtin_memcpy (VECT_VAR(buffer_vld3_lane, mfloat, 8, 3),
VECT_VAR(buffer_vld3_lane, int, 8, 3), 3);
__builtin_memcpy (VECT_VAR(buffer_vld4_lane, mfloat, 8, 4),
VECT_VAR(buffer_vld4_lane, int, 8, 4), 4);
#endif
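The expected_stN arrays above have nonzero values only in their first N elements because vstN_lane writes lane X of each of the N input vectors to N consecutive elements of an otherwise zero-initialized buffer. A scalar model for byte elements:

```c
#include <assert.h>

/* Scalar model of vstN_lane on byte elements: write lane LANE of
   each of N vectors to N consecutive bytes of DST; the rest of DST
   is left untouched (zero in the tests).  */
static void
stn_lane_bytes (unsigned char *dst, unsigned char vecs[][8],
                int n, int lane)
{
  for (int i = 0; i < n; i++)
    dst[i] = vecs[i][lane];
}
```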
/* Declare the temporary buffers / variables. */
DECL_ALL_VSTX_LANE(2);
DECL_ALL_VSTX_LANE(3);
@ -434,6 +491,9 @@ void exec_vstX_lane (void)
DUMMY_ARRAY(buffer_src, uint, 32, 2, 4);
DUMMY_ARRAY(buffer_src, poly, 8, 8, 4);
DUMMY_ARRAY(buffer_src, poly, 16, 4, 4);
#if MFLOAT8_SUPPORTED
DUMMY_ARRAY(buffer_src, mfloat, 8, 8, 4);
#endif
#if defined (__ARM_FP16_FORMAT_IEEE) || defined (__ARM_FP16_FORMAT_ALTERNATIVE)
DUMMY_ARRAY(buffer_src, float, 16, 4, 4);
#endif
@ -462,6 +522,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st2_0, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st2_0, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st2_0, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st2_0, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st2_0, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st2_0, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st2_0, CMT);
@ -485,6 +546,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st2_1, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st2_1, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st2_1, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st2_1, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st2_1, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st2_1, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st2_1, CMT);
@ -514,6 +576,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st3_0, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st3_0, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st3_0, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st3_0, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st3_0, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st3_0, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st3_0, CMT);
@@ -538,6 +601,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st3_1, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st3_1, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st3_1, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st3_1, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st3_1, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st3_1, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st3_1, CMT);
@@ -562,6 +626,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st3_2, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st3_2, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st3_2, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st3_2, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st3_2, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st3_2, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st3_2, CMT);
@@ -591,6 +656,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st4_0, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st4_0, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st4_0, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st4_0, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st4_0, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st4_0, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st4_0, CMT);
@@ -615,6 +681,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st4_1, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st4_1, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st4_1, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st4_1, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st4_1, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st4_1, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st4_1, CMT);
@@ -639,6 +706,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st4_2, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st4_2, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st4_2, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st4_2, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st4_2, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st4_2, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st4_2, CMT);
@@ -663,6 +731,7 @@ void exec_vstX_lane (void)
CHECK(TEST_MSG, uint, 32, 2, PRIx32, expected_st4_3, CMT);
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_st4_3, CMT);
CHECK_POLY(TEST_MSG, poly, 16, 4, PRIx16, expected_st4_3, CMT);
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_st4_3, CMT));
CHECK_FP(TEST_MSG, float, 32, 2, PRIx32, expected_st4_3, CMT);
CHECK(TEST_MSG, int, 16, 8, PRIx16, expected_st4_3, CMT);
CHECK(TEST_MSG, int, 32, 4, PRIx32, expected_st4_3, CMT);


@@ -9,6 +9,10 @@ VECT_VAR_DECL(expected_vtbl1,uint,8,8) [] = { 0x0, 0xf3, 0xf3, 0xf3,
0x0, 0x0, 0xf3, 0xf3 };
VECT_VAR_DECL(expected_vtbl1,poly,8,8) [] = { 0x0, 0xf3, 0xf3, 0xf3,
0x0, 0x0, 0xf3, 0xf3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbl1,hmfloat,8,8) [] = { 0x0, 0xf3, 0xf3, 0xf3,
0x0, 0x0, 0xf3, 0xf3 };
#endif
/* Expected results for vtbl2. */
VECT_VAR_DECL(expected_vtbl2,int,8,8) [] = { 0xf6, 0xf3, 0xf3, 0xf3,
@@ -17,6 +21,10 @@ VECT_VAR_DECL(expected_vtbl2,uint,8,8) [] = { 0xf6, 0xf5, 0xf5, 0xf5,
0x0, 0x0, 0xf5, 0xf5 };
VECT_VAR_DECL(expected_vtbl2,poly,8,8) [] = { 0xf6, 0xf5, 0xf5, 0xf5,
0x0, 0x0, 0xf5, 0xf5 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbl2,hmfloat,8,8) [] = { 0xf6, 0xf5, 0xf5, 0xf5,
0x0, 0x0, 0xf5, 0xf5 };
#endif
/* Expected results for vtbl3. */
VECT_VAR_DECL(expected_vtbl3,int,8,8) [] = { 0xf8, 0xf4, 0xf4, 0xf4,
@@ -25,6 +33,10 @@ VECT_VAR_DECL(expected_vtbl3,uint,8,8) [] = { 0xf8, 0xf7, 0xf7, 0xf7,
0xff, 0x0, 0xf7, 0xf7 };
VECT_VAR_DECL(expected_vtbl3,poly,8,8) [] = { 0xf8, 0xf7, 0xf7, 0xf7,
0xff, 0x0, 0xf7, 0xf7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbl3,hmfloat,8,8) [] = { 0xf8, 0xf7, 0xf7, 0xf7,
0xff, 0x0, 0xf7, 0xf7 };
#endif
/* Expected results for vtbl4. */
VECT_VAR_DECL(expected_vtbl4,int,8,8) [] = { 0xfa, 0xf5, 0xf5, 0xf5,
@@ -33,6 +45,10 @@ VECT_VAR_DECL(expected_vtbl4,uint,8,8) [] = { 0xfa, 0xf9, 0xf9, 0xf9,
0x3, 0x0, 0xf9, 0xf9 };
VECT_VAR_DECL(expected_vtbl4,poly,8,8) [] = { 0xfa, 0xf9, 0xf9, 0xf9,
0x3, 0x0, 0xf9, 0xf9 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbl4,hmfloat,8,8) [] = { 0xfa, 0xf9, 0xf9, 0xf9,
0x3, 0x0, 0xf9, 0xf9 };
#endif
/* Expected results for vtbx1. */
VECT_VAR_DECL(expected_vtbx1,int,8,8) [] = { 0x33, 0xf2, 0xf2, 0xf2,
@@ -41,6 +57,10 @@ VECT_VAR_DECL(expected_vtbx1,uint,8,8) [] = { 0xcc, 0xf3, 0xf3, 0xf3,
0xcc, 0xcc, 0xf3, 0xf3 };
VECT_VAR_DECL(expected_vtbx1,poly,8,8) [] = { 0xcc, 0xf3, 0xf3, 0xf3,
0xcc, 0xcc, 0xf3, 0xf3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbx1,hmfloat,8,8) [] = { 0x55, 0xf3, 0xf3, 0xf3,
0x55, 0x55, 0xf3, 0xf3 };
#endif
/* Expected results for vtbx2. */
VECT_VAR_DECL(expected_vtbx2,int,8,8) [] = { 0xf6, 0xf3, 0xf3, 0xf3,
@@ -49,6 +69,10 @@ VECT_VAR_DECL(expected_vtbx2,uint,8,8) [] = { 0xf6, 0xf5, 0xf5, 0xf5,
0xcc, 0xcc, 0xf5, 0xf5 };
VECT_VAR_DECL(expected_vtbx2,poly,8,8) [] = { 0xf6, 0xf5, 0xf5, 0xf5,
0xcc, 0xcc, 0xf5, 0xf5 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbx2,hmfloat,8,8) [] = { 0xf6, 0xf5, 0xf5, 0xf5,
0x55, 0x55, 0xf5, 0xf5 };
#endif
/* Expected results for vtbx3. */
VECT_VAR_DECL(expected_vtbx3,int,8,8) [] = { 0xf8, 0xf4, 0xf4, 0xf4,
@@ -57,6 +81,10 @@ VECT_VAR_DECL(expected_vtbx3,uint,8,8) [] = { 0xf8, 0xf7, 0xf7, 0xf7,
0xff, 0xcc, 0xf7, 0xf7 };
VECT_VAR_DECL(expected_vtbx3,poly,8,8) [] = { 0xf8, 0xf7, 0xf7, 0xf7,
0xff, 0xcc, 0xf7, 0xf7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbx3,hmfloat,8,8) [] = { 0xf8, 0xf7, 0xf7, 0xf7,
0xff, 0x55, 0xf7, 0xf7 };
#endif
/* Expected results for vtbx4. */
VECT_VAR_DECL(expected_vtbx4,int,8,8) [] = { 0xfa, 0xf5, 0xf5, 0xf5,
@@ -65,6 +93,10 @@ VECT_VAR_DECL(expected_vtbx4,uint,8,8) [] = { 0xfa, 0xf9, 0xf9, 0xf9,
0x3, 0xcc, 0xf9, 0xf9 };
VECT_VAR_DECL(expected_vtbx4,poly,8,8) [] = { 0xfa, 0xf9, 0xf9, 0xf9,
0x3, 0xcc, 0xf9, 0xf9 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected_vtbx4,hmfloat,8,8) [] = { 0xfa, 0xf9, 0xf9, 0xf9,
0x3, 0x55, 0xf9, 0xf9 };
#endif
void exec_vtbX (void)
{
@@ -105,32 +137,38 @@ void exec_vtbX (void)
DECL_VARIABLE(vector_res, int, 8, 8);
DECL_VARIABLE(vector_res, uint, 8, 8);
DECL_VARIABLE(vector_res, poly, 8, 8);
MFLOAT8_ONLY(DECL_VARIABLE(vector_res, mfloat, 8, 8));
/* For vtbl1. */
DECL_VARIABLE(table_vector, int, 8, 8);
DECL_VARIABLE(table_vector, uint, 8, 8);
DECL_VARIABLE(table_vector, poly, 8, 8);
MFLOAT8_ONLY(DECL_VARIABLE(table_vector, mfloat, 8, 8));
/* For vtbx*. */
DECL_VARIABLE(default_vector, int, 8, 8);
DECL_VARIABLE(default_vector, uint, 8, 8);
DECL_VARIABLE(default_vector, poly, 8, 8);
MFLOAT8_ONLY(DECL_VARIABLE(default_vector, mfloat, 8, 8));
/* We need only 8 bits variants. */
#define DECL_ALL_VTBLX(X) \
DECL_VTBX(int, 8, 8, X); \
DECL_VTBX(uint, 8, 8, X); \
DECL_VTBX(poly, 8, 8, X)
DECL_VTBX(poly, 8, 8, X); \
MFLOAT8_ONLY(DECL_VTBX(mfloat, 8, 8, X))
#define TEST_ALL_VTBL1() \
TEST_VTBL1(int, s, int, 8, 8); \
TEST_VTBL1(uint, u, uint, 8, 8); \
TEST_VTBL1(poly, p, uint, 8, 8)
TEST_VTBL1(poly, p, uint, 8, 8); \
MFLOAT8_ONLY(TEST_VTBL1(mfloat, mf, uint, 8, 8))
#define TEST_ALL_VTBLX(X) \
TEST_VTBLX(int, s, int, 8, 8, X); \
TEST_VTBLX(uint, u, uint, 8, 8, X); \
TEST_VTBLX(poly, p, uint, 8, 8, X)
TEST_VTBLX(poly, p, uint, 8, 8, X); \
MFLOAT8_ONLY(TEST_VTBLX(mfloat, mf, uint, 8, 8, X))
/* Declare the temporary buffers / variables. */
DECL_ALL_VTBLX(2);
@@ -168,6 +206,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbl1, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbl1, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbl1, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbl1, ""));
/* Check vtbl2. */
clean_results ();
@@ -178,6 +217,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbl2, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbl2, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbl2, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbl2, ""));
/* Check vtbl3. */
clean_results ();
@@ -188,6 +228,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbl3, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbl3, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbl3, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbl3, ""));
/* Check vtbl4. */
clean_results ();
@@ -198,6 +239,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbl4, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbl4, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbl4, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbl4, ""));
/* Now test VTBX. */
@@ -229,17 +271,20 @@ void exec_vtbX (void)
#define TEST_ALL_VTBX1() \
TEST_VTBX1(int, s, int, 8, 8); \
TEST_VTBX1(uint, u, uint, 8, 8); \
TEST_VTBX1(poly, p, uint, 8, 8)
TEST_VTBX1(poly, p, uint, 8, 8); \
MFLOAT8_ONLY(TEST_VTBX1(mfloat, mf, uint, 8, 8))
#define TEST_ALL_VTBXX(X) \
TEST_VTBXX(int, s, int, 8, 8, X); \
TEST_VTBXX(uint, u, uint, 8, 8, X); \
TEST_VTBXX(poly, p, uint, 8, 8, X)
TEST_VTBXX(poly, p, uint, 8, 8, X); \
MFLOAT8_ONLY(TEST_VTBXX(mfloat, mf, uint, 8, 8, X))
/* Choose init value arbitrarily, will be used as default value. */
VDUP(default_vector, , int, s, 8, 8, 0x33);
VDUP(default_vector, , uint, u, 8, 8, 0xCC);
VDUP(default_vector, , poly, p, 8, 8, 0xCC);
MFLOAT8_ONLY(VDUP(default_vector, , mfloat, mf, 8, 8, MFLOAT8(0x55)));
/* Check vtbx1. */
clean_results ();
@@ -250,6 +295,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbx1, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbx1, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbx1, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbx1, ""));
/* Check vtbx2. */
clean_results ();
@@ -260,6 +306,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbx2, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbx2, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbx2, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbx2, ""));
/* Check vtbx3. */
clean_results ();
@@ -270,6 +317,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbx3, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbx3, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbx3, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbx3, ""));
/* Check vtbx4. */
clean_results ();
@@ -280,6 +328,7 @@ void exec_vtbX (void)
CHECK(TEST_MSG, int, 8, 8, PRIx8, expected_vtbx4, "");
CHECK(TEST_MSG, uint, 8, 8, PRIx8, expected_vtbx4, "");
CHECK_POLY(TEST_MSG, poly, 8, 8, PRIx8, expected_vtbx4, "");
MFLOAT8_ONLY(CHECK_FP(TEST_MSG, mfloat, 8, 8, PRIx8, expected_vtbx4, ""));
}
int main (void)


@@ -15,6 +15,10 @@ VECT_VAR_DECL(expected0,uint,32,2) [] = { 0xfffffff0, 0xfffffff1 };
VECT_VAR_DECL(expected0,poly,8,8) [] = { 0xf0, 0xf1, 0x55, 0x55,
0xf2, 0xf3, 0x55, 0x55 };
VECT_VAR_DECL(expected0,poly,16,4) [] = { 0xfff0, 0xfff1, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xaa, 0xaa,
0xf2, 0xf3, 0xaa, 0xaa };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 4) [] = { 0xcc00, 0xcb80,
0x4b4d, 0x4b4d };
@@ -40,6 +44,12 @@ VECT_VAR_DECL(expected0,poly,8,16) [] = { 0xf0, 0xf1, 0x55, 0x55,
0xf6, 0xf7, 0x55, 0x55 };
VECT_VAR_DECL(expected0,poly,16,8) [] = { 0xfff0, 0xfff1, 0x66, 0x66,
0xfff2, 0xfff3, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xbc, 0xbc,
0xf2, 0xf3, 0xbc, 0xbc,
0xf4, 0xf5, 0xbc, 0xbc,
0xf6, 0xf7, 0xbc, 0xbc };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 8) [] = { 0xcc00, 0xcb80,
0x4b4d, 0x4b4d,
@@ -61,6 +71,10 @@ VECT_VAR_DECL(expected1,uint,32,2) [] = { 0x77, 0x77 };
VECT_VAR_DECL(expected1,poly,8,8) [] = { 0xf4, 0xf5, 0x55, 0x55,
0xf6, 0xf7, 0x55, 0x55 };
VECT_VAR_DECL(expected1,poly,16,4) [] = { 0xfff2, 0xfff3, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,8) [] = { 0xf4, 0xf5, 0xaa, 0xaa,
0xf6, 0xf7, 0xaa, 0xaa };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 4) [] = { 0xcb00, 0xca80,
0x4b4d, 0x4b4d };
@@ -86,6 +100,12 @@ VECT_VAR_DECL(expected1,poly,8,16) [] = { 0xf8, 0xf9, 0x55, 0x55,
0xfe, 0xff, 0x55, 0x55 };
VECT_VAR_DECL(expected1,poly,16,8) [] = { 0xfff4, 0xfff5, 0x66, 0x66,
0xfff6, 0xfff7, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,16) [] = { 0xf8, 0xf9, 0xbc, 0xbc,
0xfa, 0xfb, 0xbc, 0xbc,
0xfc, 0xfd, 0xbc, 0xbc,
0xfe, 0xff, 0xbc, 0xbc };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 8) [] = { 0xca00, 0xc980,
0x4b4d, 0x4b4d,


@@ -20,6 +20,10 @@ VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf0, 0x55, 0xf2, 0x55,
0xf4, 0x55, 0xf6, 0x55 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff0, 0x66, 0xfff2, 0x66 };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1800000, 0x42066666 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf0, 0x29, 0xf2, 0x29,
0xf4, 0x29, 0xf6, 0x29 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 4) [] = { 0xcc00, 0x4b4d,
0xcb00, 0x4b4d };
@@ -50,6 +54,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf0, 0x55, 0xf2, 0x55,
0xfc, 0x55, 0xfe, 0x55 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff0, 0x66, 0xfff2, 0x66,
0xfff4, 0x66, 0xfff6, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf0, 0xea, 0xf2, 0xea,
0xf4, 0xea, 0xf6, 0xea,
0xf8, 0xea, 0xfa, 0xea,
0xfc, 0xea, 0xfe, 0xea };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 8) [] = { 0xcc00, 0x4b4d,
0xcb00, 0x4b4d,
@@ -82,6 +92,10 @@ void exec_vtrn_half (void)
CLEAN(expected, uint, 64, 1);
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector, buffer);
#if MFLOAT8_SUPPORTED
VLOAD(vector, buffer, , mfloat, mf, 8, 8);
VLOAD(vector, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (FP16_SUPPORTED)
VLOAD(vector, buffer, , float, f, 16, 4);
VLOAD(vector, buffer, q, float, f, 16, 8);
@@ -99,6 +113,7 @@ void exec_vtrn_half (void)
VDUP(vector2, , uint, u, 32, 2, 0x77);
VDUP(vector2, , poly, p, 8, 8, 0x55);
VDUP(vector2, , poly, p, 16, 4, 0x66);
MFLOAT8_ONLY(VDUP(vector2, , mfloat, mf, 8, 8, MFLOAT8(0x29)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, , float, f, 16, 4, 14.6f); /* 14.6f is 0x4b4d. */
#endif
@@ -114,6 +129,7 @@ void exec_vtrn_half (void)
VDUP(vector2, q, uint, u, 64, 2, 0x88);
VDUP(vector2, q, poly, p, 8, 16, 0x55);
VDUP(vector2, q, poly, p, 16, 8, 0x66);
MFLOAT8_ONLY(VDUP(vector2, q, mfloat, mf, 8, 16, MFLOAT8(0xea)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, q, float, f, 16, 8, 14.6f);
#endif
@@ -128,6 +144,7 @@ void exec_vtrn_half (void)
TEST_VTRN1(, uint, u, 32, 2);
TEST_VTRN1(, poly, p, 8, 8);
TEST_VTRN1(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VTRN1(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VTRN1(, float, f, 16, 4);
#endif
@@ -143,6 +160,7 @@ void exec_vtrn_half (void)
TEST_VTRN1(q, uint, u, 64, 2);
TEST_VTRN1(q, poly, p, 8, 16);
TEST_VTRN1(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VTRN1(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VTRN1(q, float, f, 16, 8);
#endif
@@ -174,6 +192,10 @@ VECT_VAR_DECL(expected2,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected2,poly,8,8) [] = { 0xf1, 0x55, 0xf3, 0x55,
0xf5, 0x55, 0xf7, 0x55 };
VECT_VAR_DECL(expected2,poly,16,4) [] = { 0xfff1, 0x66, 0xfff3, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,8) [] = { 0xf1, 0x29, 0xf3, 0x29,
0xf5, 0x29, 0xf7, 0x29 };
#endif
VECT_VAR_DECL(expected2,hfloat,32,2) [] = { 0xc1700000, 0x42066666 };
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 4) [] = { 0xcb80, 0x4b4d,
@@ -205,6 +227,12 @@ VECT_VAR_DECL(expected2,poly,8,16) [] = { 0xf1, 0x55, 0xf3, 0x55,
0xfd, 0x55, 0xff, 0x55 };
VECT_VAR_DECL(expected2,poly,16,8) [] = { 0xfff1, 0x66, 0xfff3, 0x66,
0xfff5, 0x66, 0xfff7, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,16) [] = { 0xf1, 0xea, 0xf3, 0xea,
0xf5, 0xea, 0xf7, 0xea,
0xf9, 0xea, 0xfb, 0xea,
0xfd, 0xea, 0xff, 0xea };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 8) [] = { 0xcb80, 0x4b4d,
0xca80, 0x4b4d,
@@ -225,6 +253,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1700000, 0x42073333,
TEST_VTRN2(, uint, u, 32, 2);
TEST_VTRN2(, poly, p, 8, 8);
TEST_VTRN2(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VTRN2(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VTRN2(, float, f, 16, 4);
#endif
@@ -240,6 +269,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1700000, 0x42073333,
TEST_VTRN2(q, uint, u, 64, 2);
TEST_VTRN2(q, poly, p, 8, 16);
TEST_VTRN2(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VTRN2(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VTRN2(q, float, f, 16, 8);
#endif


@@ -19,6 +19,10 @@ VECT_VAR_DECL(expected0,poly,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
VECT_VAR_DECL(expected0,poly,16,4) [] = { 0xfff0, 0xfff1,
0xfff2, 0xfff3 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,8) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 4) [] = { 0xcc00, 0xcb80,
0xcb00, 0xca80 };
@@ -52,6 +56,12 @@ VECT_VAR_DECL(expected0,poly,16,8) [] = { 0xfff0, 0xfff1,
0xfff2, 0xfff3,
0xfff4, 0xfff5,
0xfff6, 0xfff7 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,16) [] = { 0xf0, 0xf1, 0xf2, 0xf3,
0xf4, 0xf5, 0xf6, 0xf7,
0xf8, 0xf9, 0xfa, 0xfb,
0xfc, 0xfd, 0xfe, 0xff };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 8) [] = { 0xcc00, 0xcb80,
0xcb00, 0xca80,
@@ -73,6 +83,10 @@ VECT_VAR_DECL(expected1,uint,32,2) [] = { 0x77, 0x77 };
VECT_VAR_DECL(expected1,poly,8,8) [] = { 0x55, 0x55, 0x55, 0x55,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected1,poly,16,4) [] = { 0x66, 0x66, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,8) [] = { 0xaa, 0xaa, 0xaa, 0xaa,
0xaa, 0xaa, 0xaa, 0xaa };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 4) [] = { 0x4b4d, 0x4b4d,
0x4b4d, 0x4b4d };
@@ -98,6 +112,12 @@ VECT_VAR_DECL(expected1,poly,8,16) [] = { 0x55, 0x55, 0x55, 0x55,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected1,poly,16,8) [] = { 0x66, 0x66, 0x66, 0x66,
0x66, 0x66, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,16) [] = { 0xbc, 0xbc, 0xbc, 0xbc,
0xbc, 0xbc, 0xbc, 0xbc,
0xbc, 0xbc, 0xbc, 0xbc,
0xbc, 0xbc, 0xbc, 0xbc };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 8) [] = { 0x4b4d, 0x4b4d,
0x4b4d, 0x4b4d,


@@ -19,6 +19,10 @@ VECT_VAR_DECL(expected,uint,64,1) [] = { 0xfffffffffffffff0 };
VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf0, 0xf2, 0xf4, 0xf6,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff0, 0xfff2, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf0, 0xf2, 0xf4, 0xf6,
0x7b, 0x7b, 0x7b, 0x7b };
#endif
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1800000, 0x42066666 };
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 4) [] = { 0xcc00, 0xcb00,
@@ -49,6 +53,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf0, 0xf2, 0xf4, 0xf6,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff0, 0xfff2, 0xfff4, 0xfff6,
0x66, 0x66, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf0, 0xf2, 0xf4, 0xf6,
0xf8, 0xfa, 0xfc, 0xfe,
0x92, 0x92, 0x92, 0x92,
0x92, 0x92, 0x92, 0x92 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 8) [] = { 0xcc00, 0xcb00, 0xca00, 0xc900,
0x4b4d, 0x4b4d, 0x4b4d, 0x4b4d };
@@ -79,6 +89,10 @@ void exec_vuzp_half (void)
CLEAN(expected, uint, 64, 1);
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector, buffer);
#if MFLOAT8_SUPPORTED
VLOAD(vector, buffer, , mfloat, mf, 8, 8);
VLOAD(vector, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (FP16_SUPPORTED)
VLOAD(vector, buffer, , float, f, 16, 4);
VLOAD(vector, buffer, q, float, f, 16, 8);
@@ -96,6 +110,7 @@ void exec_vuzp_half (void)
VDUP(vector2, , uint, u, 32, 2, 0x77);
VDUP(vector2, , poly, p, 8, 8, 0x55);
VDUP(vector2, , poly, p, 16, 4, 0x66);
MFLOAT8_ONLY(VDUP(vector2, , mfloat, mf, 8, 8, MFLOAT8(0x7b)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, , float, f, 16, 4, 14.6f); /* 14.6f is 0x4b4d. */
#endif
@@ -111,6 +126,7 @@ void exec_vuzp_half (void)
VDUP(vector2, q, uint, u, 64, 2, 0x88);
VDUP(vector2, q, poly, p, 8, 16, 0x55);
VDUP(vector2, q, poly, p, 16, 8, 0x66);
MFLOAT8_ONLY(VDUP(vector2, q, mfloat, mf, 8, 16, MFLOAT8(0x92)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, q, float, f, 16, 8, 14.6f);
#endif
@@ -125,6 +141,7 @@ void exec_vuzp_half (void)
TEST_VUZP1(, uint, u, 32, 2);
TEST_VUZP1(, poly, p, 8, 8);
TEST_VUZP1(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VUZP1(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VUZP1(, float, f, 16, 4);
#endif
@@ -140,6 +157,7 @@ void exec_vuzp_half (void)
TEST_VUZP1(q, uint, u, 64, 2);
TEST_VUZP1(q, poly, p, 8, 16);
TEST_VUZP1(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VUZP1(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VUZP1(q, float, f, 16, 8);
#endif
@@ -171,6 +189,10 @@ VECT_VAR_DECL(expected2,uint,64,1) [] = { 0xfffffffffffffff1 };
VECT_VAR_DECL(expected2,poly,8,8) [] = { 0xf1, 0xf3, 0xf5, 0xf7,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected2,poly,16,4) [] = { 0xfff1, 0xfff3, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,8) [] = { 0xf1, 0xf3, 0xf5, 0xf7,
0x7b, 0x7b, 0x7b, 0x7b };
#endif
VECT_VAR_DECL(expected2,hfloat,32,2) [] = { 0xc1700000, 0x42066666 };
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 4) [] = { 0xcb80, 0xca80,
@@ -201,6 +223,12 @@ VECT_VAR_DECL(expected2,poly,8,16) [] = { 0xf1, 0xf3, 0xf5, 0xf7,
0x55, 0x55, 0x55, 0x55 };
VECT_VAR_DECL(expected2,poly,16,8) [] = { 0xfff1, 0xfff3, 0xfff5, 0xfff7,
0x66, 0x66, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,16) [] = { 0xf1, 0xf3, 0xf5, 0xf7,
0xf9, 0xfb, 0xfd, 0xff,
0x92, 0x92, 0x92, 0x92,
0x92, 0x92, 0x92, 0x92 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 8) [] = { 0xcb80, 0xca80, 0xc980, 0xc880,
0x4b4d, 0x4b4d, 0x4b4d, 0x4b4d
@@ -221,6 +249,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1700000, 0xc1500000,
TEST_VUZP2(, uint, u, 32, 2);
TEST_VUZP2(, poly, p, 8, 8);
TEST_VUZP2(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VUZP2(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VUZP2(, float, f, 16, 4);
#endif
@@ -236,6 +265,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1700000, 0xc1500000,
TEST_VUZP2(q, uint, u, 64, 2);
TEST_VUZP2(q, poly, p, 8, 16);
TEST_VUZP2(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VUZP2(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VUZP2(q, float, f, 16, 8);
#endif


@@ -18,6 +18,10 @@ VECT_VAR_DECL(expected0,poly,8,8) [] = { 0xf0, 0xf4, 0x55, 0x55,
0xf1, 0xf5, 0x55, 0x55 };
VECT_VAR_DECL(expected0,poly,16,4) [] = { 0xfff0, 0xfff2,
0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,8) [] = { 0xf0, 0xf4, 0xaa, 0xaa,
0xf1, 0xf5, 0xaa, 0xaa };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 4) [] = { 0xcc00, 0xcb00,
0x4b4d, 0x4b4d };
@@ -45,6 +49,12 @@ VECT_VAR_DECL(expected0,poly,8,16) [] = { 0xf0, 0xf8, 0x55, 0x55,
0xf3, 0xfb, 0x55, 0x55 };
VECT_VAR_DECL(expected0,poly,16,8) [] = { 0xfff0, 0xfff4, 0x66, 0x66,
0xfff1, 0xfff5, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected0,hmfloat,8,16) [] = { 0xf0, 0xf8, 0xbc, 0xbc,
0xf1, 0xf9, 0xbc, 0xbc,
0xf2, 0xfa, 0xbc, 0xbc,
0xf3, 0xfb, 0xbc, 0xbc };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected0, hfloat, 16, 8) [] = { 0xcc00, 0xca00,
0x4b4d, 0x4b4d,
@@ -69,6 +79,10 @@ VECT_VAR_DECL(expected1,poly,8,8) [] = { 0xf2, 0xf6, 0x55, 0x55,
0xf3, 0xf7, 0x55, 0x55 };
VECT_VAR_DECL(expected1,poly,16,4) [] = { 0xfff1, 0xfff3,
0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,8) [] = { 0xf2, 0xf6, 0xaa, 0xaa,
0xf3, 0xf7, 0xaa, 0xaa };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 4) [] = { 0xcb80, 0xca80,
0x4b4d, 0x4b4d };
@@ -96,6 +110,12 @@ VECT_VAR_DECL(expected1,poly,8,16) [] = { 0xf4, 0xfc, 0x55, 0x55,
0xf7, 0xff, 0x55, 0x55 };
VECT_VAR_DECL(expected1,poly,16,8) [] = { 0xfff2, 0xfff6, 0x66, 0x66,
0xfff3, 0xfff7, 0x66, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected1,hmfloat,8,16) [] = { 0xf4, 0xfc, 0xbc, 0xbc,
0xf5, 0xfd, 0xbc, 0xbc,
0xf6, 0xfe, 0xbc, 0xbc,
0xf7, 0xff, 0xbc, 0xbc };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected1, hfloat, 16, 8) [] = { 0xcb00, 0xc900,
0x4b4d, 0x4b4d,


@@ -20,6 +20,10 @@ VECT_VAR_DECL(expected,poly,8,8) [] = { 0xf0, 0x55, 0xf1, 0x55,
0xf2, 0x55, 0xf3, 0x55 };
VECT_VAR_DECL(expected,poly,16,4) [] = { 0xfff0, 0x66, 0xfff1, 0x66 };
VECT_VAR_DECL(expected,hfloat,32,2) [] = { 0xc1800000, 0x42066666 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,8) [] = { 0xf0, 0xf9, 0xf1, 0xf9,
0xf2, 0xf9, 0xf3, 0xf9 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 4) [] = { 0xcc00, 0x4b4d,
0xcb80, 0x4b4d };
@@ -50,6 +54,12 @@ VECT_VAR_DECL(expected,poly,8,16) [] = { 0xf0, 0x55, 0xf1, 0x55,
0xf6, 0x55, 0xf7, 0x55 };
VECT_VAR_DECL(expected,poly,16,8) [] = { 0xfff0, 0x66, 0xfff1, 0x66,
0xfff2, 0x66, 0xfff3, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected,hmfloat,8,16) [] = { 0xf0, 0xd6, 0xf1, 0xd6,
0xf2, 0xd6, 0xf3, 0xd6,
0xf4, 0xd6, 0xf5, 0xd6,
0xf6, 0xd6, 0xf7, 0xd6 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected, hfloat, 16, 8) [] = { 0xcc00, 0x4b4d,
0xcb80, 0x4b4d,
@@ -82,6 +92,10 @@ void exec_vzip_half (void)
CLEAN(expected, uint, 64, 1);
TEST_MACRO_ALL_VARIANTS_2_5(VLOAD, vector, buffer);
#if MFLOAT8_SUPPORTED
VLOAD(vector, buffer, , mfloat, mf, 8, 8);
VLOAD(vector, buffer, q, mfloat, mf, 8, 16);
#endif
#if defined (FP16_SUPPORTED)
VLOAD(vector, buffer, , float, f, 16, 4);
VLOAD(vector, buffer, q, float, f, 16, 8);
@@ -99,6 +113,7 @@ void exec_vzip_half (void)
VDUP(vector2, , uint, u, 32, 2, 0x77);
VDUP(vector2, , poly, p, 8, 8, 0x55);
VDUP(vector2, , poly, p, 16, 4, 0x66);
MFLOAT8_ONLY(VDUP(vector2, , mfloat, mf, 8, 8, MFLOAT8(0xf9)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, , float, f, 16, 4, 14.6f); /* 14.6f is 0x4b4d. */
#endif
@@ -114,6 +129,7 @@ void exec_vzip_half (void)
VDUP(vector2, q, uint, u, 64, 2, 0x88);
VDUP(vector2, q, poly, p, 8, 16, 0x55);
VDUP(vector2, q, poly, p, 16, 8, 0x66);
MFLOAT8_ONLY(VDUP(vector2, q, mfloat, mf, 8, 16, MFLOAT8(0xd6)));
#if defined (FP16_SUPPORTED)
VDUP (vector2, q, float, f, 16, 8, 14.6f);
#endif
@@ -128,6 +144,7 @@ void exec_vzip_half (void)
TEST_VZIP1(, uint, u, 32, 2);
TEST_VZIP1(, poly, p, 8, 8);
TEST_VZIP1(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VZIP1(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VZIP1(, float, f, 16, 4);
#endif
@@ -143,6 +160,7 @@ void exec_vzip_half (void)
TEST_VZIP1(q, uint, u, 64, 2);
TEST_VZIP1(q, poly, p, 8, 16);
TEST_VZIP1(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VZIP1(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VZIP1(q, float, f, 16, 8);
#endif
@@ -175,6 +193,10 @@ VECT_VAR_DECL(expected2,poly,8,8) [] = { 0xf4, 0x55, 0xf5, 0x55,
0xf6, 0x55, 0xf7, 0x55 };
VECT_VAR_DECL(expected2,poly,16,4) [] = { 0xfff2, 0x66, 0xfff3, 0x66 };
VECT_VAR_DECL(expected2,hfloat,32,2) [] = { 0xc1700000, 0x42066666 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,8) [] = { 0xf4, 0xf9, 0xf5, 0xf9,
0xf6, 0xf9, 0xf7, 0xf9 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 4) [] = { 0xcb00, 0x4b4d,
0xca80, 0x4b4d };
@@ -205,6 +227,12 @@ VECT_VAR_DECL(expected2,poly,8,16) [] = { 0xf8, 0x55, 0xf9, 0x55,
0xfe, 0x55, 0xff, 0x55 };
VECT_VAR_DECL(expected2,poly,16,8) [] = { 0xfff4, 0x66, 0xfff5, 0x66,
0xfff6, 0x66, 0xfff7, 0x66 };
#if MFLOAT8_SUPPORTED
VECT_VAR_DECL(expected2,hmfloat,8,16) [] = { 0xf8, 0xd6, 0xf9, 0xd6,
0xfa, 0xd6, 0xfb, 0xd6,
0xfc, 0xd6, 0xfd, 0xd6,
0xfe, 0xd6, 0xff, 0xd6 };
#endif
#if defined (FP16_SUPPORTED)
VECT_VAR_DECL (expected2, hfloat, 16, 8) [] = { 0xca00, 0x4b4d,
0xc980, 0x4b4d,
@@ -225,6 +253,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1600000, 0x42073333,
TEST_VZIP2(, uint, u, 32, 2);
TEST_VZIP2(, poly, p, 8, 8);
TEST_VZIP2(, poly, p, 16, 4);
MFLOAT8_ONLY(TEST_VZIP2(, mfloat, mf, 8, 8));
#if defined (FP16_SUPPORTED)
TEST_VZIP2(, float, f, 16, 4);
#endif
@@ -240,6 +269,7 @@ VECT_VAR_DECL(expected2,hfloat,32,4) [] = { 0xc1600000, 0x42073333,
TEST_VZIP2(q, uint, u, 64, 2);
TEST_VZIP2(q, poly, p, 8, 16);
TEST_VZIP2(q, poly, p, 16, 8);
MFLOAT8_ONLY(TEST_VZIP2(q, mfloat, mf, 8, 16));
#if defined (FP16_SUPPORTED)
TEST_VZIP2(q, float, f, 16, 8);
#endif


@@ -196,6 +196,70 @@ test_vluti2q_laneqp8(poly8x16_t a, uint8x16_t b, poly8x16_t results[])
results[3] = vluti2q_laneq_p8(a, b, 3);
}
/*
** test_vluti2_lanemf8:
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[0\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[1\]
** ...
** ret
*/
void
test_vluti2_lanemf8(mfloat8x8_t a, uint8x8_t b, mfloat8x16_t results[])
{
results[0] = vluti2_lane_mf8(a, b, 0);
results[1] = vluti2_lane_mf8(a, b, 1);
}
/*
** test_vluti2_laneqmf8:
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[0\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[1\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[2\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[3\]
** ...
** ret
*/
void
test_vluti2_laneqmf8(mfloat8x8_t a, uint8x16_t b, mfloat8x16_t results[])
{
results[0] = vluti2_laneq_mf8(a, b, 0);
results[1] = vluti2_laneq_mf8(a, b, 1);
results[2] = vluti2_laneq_mf8(a, b, 2);
results[3] = vluti2_laneq_mf8(a, b, 3);
}
/*
** test_vluti2q_lanemf8:
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[0\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[1\]
** ...
** ret
*/
void
test_vluti2q_lanemf8(mfloat8x16_t a, uint8x8_t b, mfloat8x16_t results[])
{
results[0] = vluti2q_lane_mf8(a, b, 0);
results[1] = vluti2q_lane_mf8(a, b, 1);
}
/*
** test_vluti2q_laneqmf8:
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[0\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[1\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[2\]
** luti2 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[3\]
** ...
** ret
*/
void
test_vluti2q_laneqmf8(mfloat8x16_t a, uint8x16_t b, mfloat8x16_t results[])
{
results[0] = vluti2q_laneq_mf8(a, b, 0);
results[1] = vluti2q_laneq_mf8(a, b, 1);
results[2] = vluti2q_laneq_mf8(a, b, 2);
results[3] = vluti2q_laneq_mf8(a, b, 3);
}
/*
** test_vluti2_laneu16:
** luti2 v[0-9]+\.8h, {v[0-9]+\.8h}, v[0-9]+\[0\]
@@ -688,6 +752,32 @@ test_vluti4q_laneqp8(poly8x16_t a, uint8x16_t b, poly8x16_t results[])
results[1] = vluti4q_laneq_p8(a, b, 1);
}
/*
** test_vluti4q_lanemf8:
** luti4 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[0\]
** ...
** ret
*/
void
test_vluti4q_lanemf8(mfloat8x16_t a, uint8x8_t b, mfloat8x16_t results[])
{
results[0] = vluti4q_lane_mf8(a, b, 0);
}
/*
** test_vluti4q_laneqmf8:
** luti4 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[0\]
** luti4 v[0-9]+\.16b, {v[0-9]+\.16b}, v[0-9]+\[1\]
** ...
** ret
*/
void
test_vluti4q_laneqmf8(mfloat8x16_t a, uint8x16_t b, mfloat8x16_t results[])
{
results[0] = vluti4q_laneq_mf8(a, b, 0);
results[1] = vluti4q_laneq_mf8(a, b, 1);
}
/*
** test_vluti4q_laneu16_x2:
** luti4 v[0-9]+\.8h, {v[0-9]+\.8h, v[0-9]+\.8h}, v[0-9]+\[0\]

(File diff suppressed because it is too large.)


@@ -0,0 +1,98 @@
/* { dg-do assemble } */
/* { dg-additional-options "-O -std=gnu23 --save-temps" } */
#include <arm_neon.h>
void test(mfloat8x8_t x8, mfloat8x16_t x16,
mfloat8x8x2_t x8x2, mfloat8x16x2_t x16x2,
mfloat8x8x3_t x8x3, mfloat8x16x3_t x16x3,
mfloat8x8x4_t x8x4, mfloat8x16x4_t x16x4,
mfloat8_t *ptr, mfloat8_t scalar)
{
vcopy_lane_mf8(x8, -1, x8, 0); /* { dg-error {passing -1 to argument 2 of 'vcopy_lane_mf8', which expects a value in the range \[0, 7\]} } */
vcopy_lane_mf8(x8, 8, x8, 0); /* { dg-error {passing 8 to argument 2 of 'vcopy_lane_mf8', which expects a value in the range \[0, 7\]} } */
vcopy_lane_mf8(x8, 0, x8, -1); /* { dg-error {passing -1 to argument 4 of 'vcopy_lane_mf8', which expects a value in the range \[0, 7\]} } */
vcopy_lane_mf8(x8, 0, x8, 8); /* { dg-error {passing 8 to argument 4 of 'vcopy_lane_mf8', which expects a value in the range \[0, 7\]} } */
vcopy_lane_mf8(x8, 100, x8, 100); /* { dg-error {passing 100 to argument 2 of 'vcopy_lane_mf8', which expects a value in the range \[0, 7\]} } */
/* { dg-error {passing 100 to argument 4 of 'vcopy_lane_mf8', which expects a value in the range \[0, 7\]} "" { target *-*-* } .-1 } */
vcopy_laneq_mf8(x8, -1, x16, 0); /* { dg-error {passing -1 to argument 2 of 'vcopy_laneq_mf8', which expects a value in the range \[0, 7\]} } */
vcopy_laneq_mf8(x8, 8, x16, 0); /* { dg-error {passing 8 to argument 2 of 'vcopy_laneq_mf8', which expects a value in the range \[0, 7\]} } */
vcopy_laneq_mf8(x8, 0, x16, -1); /* { dg-error {passing -1 to argument 4 of 'vcopy_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vcopy_laneq_mf8(x8, 0, x16, 16); /* { dg-error {passing 16 to argument 4 of 'vcopy_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vcopyq_lane_mf8(x16, -1, x8, 0); /* { dg-error {passing -1 to argument 2 of 'vcopyq_lane_mf8', which expects a value in the range \[0, 15\]} } */
vcopyq_lane_mf8(x16, 16, x8, 0); /* { dg-error {passing 16 to argument 2 of 'vcopyq_lane_mf8', which expects a value in the range \[0, 15\]} } */
vcopyq_lane_mf8(x16, 0, x8, -1); /* { dg-error {passing -1 to argument 4 of 'vcopyq_lane_mf8', which expects a value in the range \[0, 7\]} } */
vcopyq_lane_mf8(x16, 0, x8, 8); /* { dg-error {passing 8 to argument 4 of 'vcopyq_lane_mf8', which expects a value in the range \[0, 7\]} } */
vcopyq_laneq_mf8(x16, -1, x16, 0); /* { dg-error {passing -1 to argument 2 of 'vcopyq_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vcopyq_laneq_mf8(x16, 16, x16, 0); /* { dg-error {passing 16 to argument 2 of 'vcopyq_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vcopyq_laneq_mf8(x16, 0, x16, -1); /* { dg-error {passing -1 to argument 4 of 'vcopyq_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vcopyq_laneq_mf8(x16, 0, x16, 16); /* { dg-error {passing 16 to argument 4 of 'vcopyq_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vdup_lane_mf8(x8, -1); /* { dg-error {passing -1 to argument 2 of 'vdup_lane_mf8', which expects a value in the range \[0, 7\]} } */
vdup_lane_mf8(x8, 8); /* { dg-error {passing 8 to argument 2 of 'vdup_lane_mf8', which expects a value in the range \[0, 7\]} } */
vdup_laneq_mf8(x16, -1); /* { dg-error {passing -1 to argument 2 of 'vdup_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vdup_laneq_mf8(x16, 16); /* { dg-error {passing 16 to argument 2 of 'vdup_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vdupq_lane_mf8(x8, -1); /* { dg-error {passing -1 to argument 2 of 'vdupq_lane_mf8', which expects a value in the range \[0, 7\]} } */
vdupq_lane_mf8(x8, 8); /* { dg-error {passing 8 to argument 2 of 'vdupq_lane_mf8', which expects a value in the range \[0, 7\]} } */
vdupq_laneq_mf8(x16, -1); /* { dg-error {passing -1 to argument 2 of 'vdupq_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vdupq_laneq_mf8(x16, 16); /* { dg-error {passing 16 to argument 2 of 'vdupq_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vdupb_lane_mf8(x8, -1); /* { dg-error {passing -1 to argument 2 of 'vdupb_lane_mf8', which expects a value in the range \[0, 7\]} } */
vdupb_lane_mf8(x8, 8); /* { dg-error {passing 8 to argument 2 of 'vdupb_lane_mf8', which expects a value in the range \[0, 7\]} } */
vdupb_laneq_mf8(x16, -1); /* { dg-error {passing -1 to argument 2 of 'vdupb_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vdupb_laneq_mf8(x16, 16); /* { dg-error {passing 16 to argument 2 of 'vdupb_laneq_mf8', which expects a value in the range \[0, 15\]} } */
vext_mf8(x8, x8, -1); /* { dg-error {passing -1 to argument 3 of 'vext_mf8', which expects a value in the range \[0, 7\]} } */
vext_mf8(x8, x8, 8); /* { dg-error {passing 8 to argument 3 of 'vext_mf8', which expects a value in the range \[0, 7\]} } */
vextq_mf8(x16, x16, -1); /* { dg-error {passing -1 to argument 3 of 'vextq_mf8', which expects a value in the range \[0, 15\]} } */
vextq_mf8(x16, x16, 16); /* { dg-error {passing 16 to argument 3 of 'vextq_mf8', which expects a value in the range \[0, 15\]} } */
vld1_lane_mf8(ptr, x8, -1); /* { dg-error {passing -1 to argument 3 of 'vld1_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld1_lane_mf8(ptr, x8, 8); /* { dg-error {passing 8 to argument 3 of 'vld1_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld1q_lane_mf8(ptr, x16, -1); /* { dg-error {passing -1 to argument 3 of 'vld1q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vld1q_lane_mf8(ptr, x16, 16); /* { dg-error {passing 16 to argument 3 of 'vld1q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vld2_lane_mf8(ptr, x8x2, -1); /* { dg-error {passing -1 to argument 3 of 'vld2_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld2_lane_mf8(ptr, x8x2, 8); /* { dg-error {passing 8 to argument 3 of 'vld2_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld2q_lane_mf8(ptr, x16x2, -1); /* { dg-error {passing -1 to argument 3 of 'vld2q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vld2q_lane_mf8(ptr, x16x2, 16); /* { dg-error {passing 16 to argument 3 of 'vld2q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vld3_lane_mf8(ptr, x8x3, -1); /* { dg-error {passing -1 to argument 3 of 'vld3_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld3_lane_mf8(ptr, x8x3, 8); /* { dg-error {passing 8 to argument 3 of 'vld3_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld3q_lane_mf8(ptr, x16x3, -1); /* { dg-error {passing -1 to argument 3 of 'vld3q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vld3q_lane_mf8(ptr, x16x3, 16); /* { dg-error {passing 16 to argument 3 of 'vld3q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vld4_lane_mf8(ptr, x8x4, -1); /* { dg-error {passing -1 to argument 3 of 'vld4_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld4_lane_mf8(ptr, x8x4, 8); /* { dg-error {passing 8 to argument 3 of 'vld4_lane_mf8', which expects a value in the range \[0, 7\]} } */
vld4q_lane_mf8(ptr, x16x4, -1); /* { dg-error {passing -1 to argument 3 of 'vld4q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vld4q_lane_mf8(ptr, x16x4, 16); /* { dg-error {passing 16 to argument 3 of 'vld4q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vset_lane_mf8(scalar, x8, -1); /* { dg-error {passing -1 to argument 3 of 'vset_lane_mf8', which expects a value in the range \[0, 7\]} } */
vset_lane_mf8(scalar, x8, 8); /* { dg-error {passing 8 to argument 3 of 'vset_lane_mf8', which expects a value in the range \[0, 7\]} } */
vsetq_lane_mf8(scalar, x16, -1); /* { dg-error {passing -1 to argument 3 of 'vsetq_lane_mf8', which expects a value in the range \[0, 15\]} } */
vsetq_lane_mf8(scalar, x16, 16); /* { dg-error {passing 16 to argument 3 of 'vsetq_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst1_lane_mf8(ptr, x8, -1); /* { dg-error {passing -1 to argument 3 of 'vst1_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst1_lane_mf8(ptr, x8, 8); /* { dg-error {passing 8 to argument 3 of 'vst1_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst1q_lane_mf8(ptr, x16, -1); /* { dg-error {passing -1 to argument 3 of 'vst1q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst1q_lane_mf8(ptr, x16, 16); /* { dg-error {passing 16 to argument 3 of 'vst1q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst2_lane_mf8(ptr, x8x2, -1); /* { dg-error {passing -1 to argument 3 of 'vst2_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst2_lane_mf8(ptr, x8x2, 8); /* { dg-error {passing 8 to argument 3 of 'vst2_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst2q_lane_mf8(ptr, x16x2, -1); /* { dg-error {passing -1 to argument 3 of 'vst2q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst2q_lane_mf8(ptr, x16x2, 16); /* { dg-error {passing 16 to argument 3 of 'vst2q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst3_lane_mf8(ptr, x8x3, -1); /* { dg-error {passing -1 to argument 3 of 'vst3_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst3_lane_mf8(ptr, x8x3, 8); /* { dg-error {passing 8 to argument 3 of 'vst3_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst3q_lane_mf8(ptr, x16x3, -1); /* { dg-error {passing -1 to argument 3 of 'vst3q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst3q_lane_mf8(ptr, x16x3, 16); /* { dg-error {passing 16 to argument 3 of 'vst3q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst4_lane_mf8(ptr, x8x4, -1); /* { dg-error {passing -1 to argument 3 of 'vst4_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst4_lane_mf8(ptr, x8x4, 8); /* { dg-error {passing 8 to argument 3 of 'vst4_lane_mf8', which expects a value in the range \[0, 7\]} } */
vst4q_lane_mf8(ptr, x16x4, -1); /* { dg-error {passing -1 to argument 3 of 'vst4q_lane_mf8', which expects a value in the range \[0, 15\]} } */
vst4q_lane_mf8(ptr, x16x4, 16); /* { dg-error {passing 16 to argument 3 of 'vst4q_lane_mf8', which expects a value in the range \[0, 15\]} } */
}


@@ -6,6 +6,92 @@
extern void abort (void);
mfloat8x8_t __attribute__ ((noinline))
wrap_vdup_lane_mf8_0 (mfloat8x8_t a)
{
return vdup_lane_mf8 (a, 0);
}
mfloat8x8_t __attribute__ ((noinline))
wrap_vdup_lane_mf8_1 (mfloat8x8_t a)
{
return vdup_lane_mf8 (a, 1);
}
int __attribute__ ((noinline))
test_vdup_lane_mf8 ()
{
mfloat8_t m;
uint8_t n = 11;
mfloat8x8_t a;
mfloat8x8_t b;
int i;
/* Only the first two cases are interesting. */
mfloat8_t c[8];
mfloat8_t d[8];
__builtin_memcpy(&m, &n, 1);
b = vdup_n_mf8 (m);
vst1_mf8 (d, b);
a = vld1_mf8 (c);
b = wrap_vdup_lane_mf8_0 (a);
vst1_mf8 (d, b);
for (i = 0; i < 8; i++)
if (__builtin_memcmp (&c[0], &d[i], 1) != 0)
return 1;
b = wrap_vdup_lane_mf8_1 (a);
vst1_mf8 (d, b);
for (i = 0; i < 8; i++)
if (__builtin_memcmp (&c[1], &d[i], 1) != 0)
return 1;
return 0;
}
mfloat8x16_t __attribute__ ((noinline))
wrap_vdupq_lane_mf8_0 (mfloat8x8_t a)
{
return vdupq_lane_mf8 (a, 0);
}
mfloat8x16_t __attribute__ ((noinline))
wrap_vdupq_lane_mf8_1 (mfloat8x8_t a)
{
return vdupq_lane_mf8 (a, 1);
}
int __attribute__ ((noinline))
test_vdupq_lane_mf8 ()
{
mfloat8_t m;
uint8_t n = 11;
mfloat8x8_t a;
mfloat8x16_t b;
int i;
/* Only the first two cases are interesting. */
mfloat8_t c[8];
mfloat8_t d[16];
__builtin_memcpy(&m, &n, 1);
b = vdupq_n_mf8 (m);
vst1q_mf8 (d, b);
a = vld1_mf8 (c);
b = wrap_vdupq_lane_mf8_0 (a);
vst1q_mf8 (d, b);
for (i = 0; i < 16; i++)
if (__builtin_memcmp (&c[0], &d[i], 1) != 0)
return 1;
b = wrap_vdupq_lane_mf8_1 (a);
vst1q_mf8 (d, b);
for (i = 0; i < 16; i++)
if (__builtin_memcmp (&c[1], &d[i], 1) != 0)
return 1;
return 0;
}
float32x2_t __attribute__ ((noinline))
wrap_vdup_lane_f32_0 (float32x2_t a)
{
@@ -350,7 +436,10 @@ test_vdupq_lane_s64 ()
int
main ()
{
if (test_vdup_lane_mf8 ())
abort ();
if (test_vdupq_lane_mf8 ())
abort ();
if (test_vdup_lane_f32 ())
abort ();
if (test_vdup_lane_s8 ())
@@ -376,12 +465,12 @@ main ()
}
/* Asm check for test_vdup_lane_s8. */
-/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.8b, v\[0-9\]+\.b\\\[0\\\]" 1 } } */
-/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.8b, v\[0-9\]+\.b\\\[1\\\]" 1 } } */
+/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.8b, v\[0-9\]+\.b\\\[0\\\]" 2 } } */
+/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.8b, v\[0-9\]+\.b\\\[1\\\]" 2 } } */
/* Asm check for test_vdupq_lane_s8. */
-/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.16b, v\[0-9\]+\.b\\\[0\\\]" 1 } } */
-/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.16b, v\[0-9\]+\.b\\\[1\\\]" 1 } } */
+/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.16b, v\[0-9\]+\.b\\\[0\\\]" 2 } } */
+/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.16b, v\[0-9\]+\.b\\\[1\\\]" 2 } } */
/* Asm check for test_vdup_lane_s16. */
/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.4h, v\[0-9\]+\.h\\\[0\\\]" 1 } } */


@@ -11,6 +11,45 @@
extern void abort (void);
mfloat8_t __attribute__ ((noinline))
wrap_vdupb_lane_mf8_0 (mfloat8x8_t dummy, mfloat8x8_t a)
{
mfloat8_t result = vdupb_lane_mf8 (a, 0);
force_simd (result);
return result;
}
mfloat8_t __attribute__ ((noinline))
wrap_vdupb_lane_mf8_1 (mfloat8x8_t a)
{
mfloat8_t result = vdupb_lane_mf8 (a, 1);
force_simd (result);
return result;
}
int __attribute__ ((noinline))
test_vdupb_lane_mf8 ()
{
mfloat8_t m;
uint8_t n = 11;
mfloat8x8_t a;
mfloat8_t b;
mfloat8_t c[8];
__builtin_memcpy(&m, &n, 1);
a = vdup_n_mf8 (m);
vst1_mf8 (c, a);
b = wrap_vdupb_lane_mf8_0 (a, a);
if (__builtin_memcmp (&c[0], &b, 1) != 0)
return 1;
b = wrap_vdupb_lane_mf8_1 (a);
if (__builtin_memcmp (&c[1], &b, 1) != 0)
return 1;
return 0;
}
float32_t __attribute__ ((noinline))
wrap_vdups_lane_f32_0 (float32x2_t dummy, float32x2_t a)
{
@@ -300,6 +339,8 @@ test_vdupd_lane_s64 ()
int
main ()
{
if (test_vdupb_lane_mf8 ())
abort ();
if (test_vdups_lane_f32 ())
abort ();
if (test_vdupd_lane_f64 ())
@@ -323,9 +364,9 @@ main ()
return 0;
}
-/* Asm check for vdupb_lane_s8, vdupb_lane_u8. */
+/* Asm check for vdupb_lane_s8, vdupb_lane_u8, and vdupb_lane_mf8. */
/* { dg-final { scan-assembler-not "dup\\tb\[0-9\]+, v\[0-9\]+\.b\\\[0\\\]" } } */
-/* { dg-final { scan-assembler-times "dup\\tb\[0-9\]+, v\[0-9\]+\.b\\\[1\\\]" 2 } } */
+/* { dg-final { scan-assembler-times "dup\\tb\[0-9\]+, v\[0-9\]+\.b\\\[1\\\]" 3 } } */
/* Asm check for vduph_lane_s16, vduph_lane_u16. */
/* { dg-final { scan-assembler-not "dup\\th\[0-9\]+, v\[0-9\]+\.h\\\[0\\\]" } } */


@@ -6,6 +6,48 @@
extern void abort (void);
mfloat8x8_t __attribute__ ((noinline))
wrap_vdup_n_mf8 (mfloat8_t a)
{
return vdup_n_mf8 (a);
}
int __attribute__ ((noinline))
test_vdup_n_mf8 (mfloat8_t a)
{
mfloat8x8_t b;
mfloat8_t c[8];
int i;
b = wrap_vdup_n_mf8 (a);
vst1_mf8 (c, b);
for (i = 0; i < 8; i++)
if (__builtin_memcmp (&a, &c[i], 1) != 0)
return 1;
return 0;
}
mfloat8x16_t __attribute__ ((noinline))
wrap_vdupq_n_mf8 (mfloat8_t a)
{
return vdupq_n_mf8 (a);
}
int __attribute__ ((noinline))
test_vdupq_n_mf8 (mfloat8_t a)
{
mfloat8x16_t b;
mfloat8_t c[16];
int i;
b = wrap_vdupq_n_mf8 (a);
vst1q_mf8 (c, b);
for (i = 0; i < 16; i++)
if (__builtin_memcmp (&a, &c[i], 1) != 0)
return 1;
return 0;
}
float32x2_t __attribute__ ((noinline))
wrap_vdup_n_f32 (float32_t a)
{
@@ -537,6 +579,16 @@ test_vdupq_n_u64 ()
int
main ()
{
mfloat8_t a, c;
uint8_t b = 11;
uint8_t d = 12;
__builtin_memcpy(&a, &b, 1);
__builtin_memcpy(&c, &d, 1);
if (test_vdup_n_mf8(a))
abort ();
if (test_vdupq_n_mf8(c))
abort ();
if (test_vdup_n_f32 ())
abort ();
if (test_vdup_n_f64 ())
@@ -591,12 +643,16 @@ main ()
/* No asm checks for vdup_n_f32, vdupq_n_f32, vdup_n_f64 and vdupq_n_f64.
Cannot force floating point value in general purpose register. */
-/* Asm check for test_vdup_n_p8, test_vdup_n_s8, test_vdup_n_u8. */
-/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.8b, w\[0-9\]+" 3 } } */
+/* Asm check for test_vdup_n_mf8, test_vdup_n_p8, test_vdup_n_s8,
+   test_vdup_n_u8. */
+/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.8b, w\[0-9\]+" 5 } } */
/* Asm check for test_vdupq_n_p8, test_vdupq_n_s8, test_vdupq_n_u8. */
/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.16b, w\[0-9\]+" 3 } } */
/* Asm check for test_vdupq_n_mf8. */
/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.16b, v\[0-9\]+\.b\\\[0\\\]" 1 } } */
/* Asm check for test_vdup_n_p16, test_vdup_n_s16, test_vdup_n_u16. */
/* { dg-final { scan-assembler-times "dup\\tv\[0-9\]+\.4h, w\[0-9\]+" 3 } } */


@@ -14,7 +14,8 @@ test_copy##Q1##_lane##Q2##_##SUFFIX (TYPE1 a, TYPE2 b) \
BUILD_TEST (poly8x8_t, poly8x8_t, , , p8, 7, 6)
BUILD_TEST (int8x8_t, int8x8_t, , , s8, 7, 6)
BUILD_TEST (uint8x8_t, uint8x8_t, , , u8, 7, 6)
-/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[7\\\], v1.b\\\[6\\\]" 3 } } */
+BUILD_TEST (mfloat8x8_t, mfloat8x8_t, , , mf8, 7, 6)
+/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[7\\\], v1.b\\\[6\\\]" 4 } } */
BUILD_TEST (poly16x4_t, poly16x4_t, , , p16, 3, 2)
BUILD_TEST (int16x4_t, int16x4_t, , , s16, 3, 2)
BUILD_TEST (uint16x4_t, uint16x4_t, , , u16, 3, 2)
@@ -33,7 +34,8 @@ BUILD_TEST (float64x1_t, float64x1_t, , , f64, 0, 0)
BUILD_TEST (poly8x8_t, poly8x16_t, , q, p8, 7, 15)
BUILD_TEST (int8x8_t, int8x16_t, , q, s8, 7, 15)
BUILD_TEST (uint8x8_t, uint8x16_t, , q, u8, 7, 15)
-/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[7\\\], v1.b\\\[15\\\]" 3 } } */
+BUILD_TEST (mfloat8x8_t, mfloat8x16_t, , q, mf8, 7, 15)
+/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[7\\\], v1.b\\\[15\\\]" 4 } } */
BUILD_TEST (poly16x4_t, poly16x8_t, , q, p16, 3, 7)
BUILD_TEST (int16x4_t, int16x8_t, , q, s16, 3, 7)
BUILD_TEST (uint16x4_t, uint16x8_t, , q, u16, 3, 7)
@@ -51,7 +53,8 @@ BUILD_TEST (uint64x1_t, uint64x2_t, , q, u64, 0, 1)
BUILD_TEST (poly8x16_t, poly8x8_t, q, , p8, 15, 7)
BUILD_TEST (int8x16_t, int8x8_t, q, , s8, 15, 7)
BUILD_TEST (uint8x16_t, uint8x8_t, q, , u8, 15, 7)
-/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[15\\\], v1.b\\\[7\\\]" 3 } } */
+BUILD_TEST (mfloat8x16_t, mfloat8x8_t, q, , mf8, 15, 7)
+/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[15\\\], v1.b\\\[7\\\]" 4 } } */
BUILD_TEST (poly16x8_t, poly16x4_t, q, , p16, 7, 3)
BUILD_TEST (int16x8_t, int16x4_t, q, , s16, 7, 3)
BUILD_TEST (uint16x8_t, uint16x4_t, q, , u16, 7, 3)
@@ -70,7 +73,8 @@ BUILD_TEST (uint64x2_t, uint64x1_t, q, , u64, 1, 0)
BUILD_TEST (poly8x16_t, poly8x16_t, q, q, p8, 14, 15)
BUILD_TEST (int8x16_t, int8x16_t, q, q, s8, 14, 15)
BUILD_TEST (uint8x16_t, uint8x16_t, q, q, u8, 14, 15)
-/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[14\\\], v1.b\\\[15\\\]" 3 } } */
+BUILD_TEST (mfloat8x16_t, mfloat8x16_t, q, q, mf8, 14, 15)
+/* { dg-final { scan-assembler-times "ins\\tv0.b\\\[14\\\], v1.b\\\[15\\\]" 4 } } */
BUILD_TEST (poly16x8_t, poly16x8_t, q, q, p16, 6, 7)
BUILD_TEST (int16x8_t, int16x8_t, q, q, s16, 6, 7)
BUILD_TEST (uint16x8_t, uint16x8_t, q, q, u16, 6, 7)