Commit Graph

727 Commits (33d7a288298f02df3eadd509735f0f75e3f80d73)

Author SHA1 Message Date
Avinesh Kumar 4aaefd93b9 target-ppc: fix invalid mask - cmpl, bctar
cmpl:  invalid bit mask should be 0x00400001
bctar: invalid bit mask should be 0x0000E000

Signed-off-by: Avinesh Kumar <avinesku@linux.vnet.ibm.com>
Signed-off-by: Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-10-05 11:05:28 +11:00
Nikunj A Dadhania d76ab5e1c7 target-ppc: tlbie/tlbivax should have global effect
tlbie (BookS) and tlbivax (BookE), plus the H_CALLs (pseries), should
have a global effect.

Introduce a TLB_NEED_GLOBAL_FLUSH flag. During the lazy TLB flush, after
taking care of pending local flushes, check whether a broadcast flush
(at a context-synchronizing event such as ptesync/tlbsync, etc.) is
needed. Depending on the bitmask state of tlb_need_flush, the TLB is
flushed on the other CPUs if needed and the flags are cleared.
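
An illustrative sketch of the lazy-flush logic described above (plain C,
not the actual QEMU code; the struct, helper and TLB_NEED_LOCAL_FLUSH
names here are assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_NEED_LOCAL_FLUSH   0x1   /* assumed local-flush bit */
    #define TLB_NEED_GLOBAL_FLUSH  0x2   /* flag introduced by this patch */

    struct cpu_sketch {
        uint32_t tlb_need_flush;         /* bitmask of pending flushes */
    };

    /* Called at context-synchronizing events (ptesync/tlbsync, ...). */
    static void check_tlb_flush_sketch(struct cpu_sketch *cs, bool global)
    {
        if (cs->tlb_need_flush & TLB_NEED_LOCAL_FLUSH) {
            /* flush this CPU's own TLB ... */
            cs->tlb_need_flush &= ~TLB_NEED_LOCAL_FLUSH;
        }
        if (global && (cs->tlb_need_flush & TLB_NEED_GLOBAL_FLUSH)) {
            /* broadcast: flush the TLBs of the other CPUs ... */
            cs->tlb_need_flush &= ~TLB_NEED_GLOBAL_FLUSH;
        }
    }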

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[dwg: Use 'true' instead of '1' for call to check_tlb_flush()]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 12:39:07 +10:00
Nikunj A Dadhania e3cffe6fad target-ppc: add flag in check_tlb_flush()
We flush the qemu TLB lazily. check_tlb_flush is called whenever we hit
a context synchronizing event or instruction that requires a pending
flush to be performed.

However, we fail to handle broadcast TLB flush operations. In order to
fix that efficiently, we want to differentiate whether check_tlb_flush()
needs to only apply pending local flushes (isync instructions,
interrupts, ...) or also global pending flush operations. The latter is
only needed when executing instructions that are defined architecturally
as synchronizing global TLB flush operations. This in our case is
ptesync on BookS and tlbsync on BookE along with the paravirtualized
hypervisor calls.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
[dwg: Changed gen_check_tlb_flush() to also take a bool, and fixed
 some spelling errors in commit message]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 12:39:07 +10:00
Ravi Bangoria fec5c62a64 target-ppc: implement darn instruction
darn: Deliver A Random Number

Currently an invalid random number is returned in all cases. A proper
algorithm is needed to provide cryptographically suitable random data.
Reading from /dev/random can block, which is not acceptable behaviour
while the cpu instruction is being executed. Moreover, /dev/random would
only work for linux-user.
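
A minimal sketch of the placeholder behaviour described above (the
function name is hypothetical; per ISA 3.0 an all-ones value signals
that no valid random number is available):

    #include <stdint.h>

    /* Placeholder darn: always report "no valid random number". */
    static uint64_t darn_placeholder(void)
    {
        return UINT64_MAX;
    }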

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
[dwg: Added minor clang warning fix for ppc32 target]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 12:38:40 +10:00
Nikunj A Dadhania ddb9ac50ae target-ppc: add stxsi[bh]x instruction
stxsibx - Store VSX Scalar as Integer Byte Indexed
stxsihx - Store VSX Scalar as Integer Halfword Indexed

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 740ae9a27f target-ppc: add lxsi[bw]zx instruction
lxsibzx - Load VSX Scalar as Integer Byte & Zero Indexed
lxsihzx - Load VSX Scalar as Integer Halfword & Zero Indexed

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania f113283525 target-ppc: add xxspltib instruction
xxspltib: VSX Vector Splat Immediate Byte

Copy the immediate byte into each byte of the target VSR.
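
An illustrative sketch of the operation on a 16-byte VSR (plain C, not
the TCG implementation; the function name is made up):

    #include <stdint.h>
    #include <string.h>

    /* Splat the 8-bit immediate into all 16 bytes of the target VSR. */
    static void xxspltib_sketch(uint8_t vsr[16], uint8_t imm8)
    {
        memset(vsr, imm8, 16);
    }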

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 2391b35773 target-ppc: consolidate store conditional
Use tcg_gen_qemu_st in the store conditional instructions.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania aa2008af0c target-ppc: move out stqcx implementation
stqcx is a 16-byte operation, which qemu_ld/st does not yet support.
Move it out so that the other store operations can use qemu_ld/st in the
following patch. Also, convert stqcx to two MO_Q operations.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 48793c95c9 target-ppc: consolidate load with reservation
Use tcg_gen_qemu_ld in the load with reservation instructions.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 804108aaf9 target-ppc: convert st[16,32,64]r to use new macro
Make byte-swap routines use the common GEN_QEMU_STORE macro

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 2468f23dcb target-ppc: convert st64 to use new macro
Use the macro for st64 as well; this changes the function signature from
gen_qemu_st64 => gen_qemu_st64_i64. Replace it at all the call sites.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 761a89c641 target-ppc: consolidate store operations
Implement a macro to consolidate the store operations using the newer
tcg_gen_qemu_st function.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania ff5f3981a2 target-ppc: convert ld[16,32,64]ur to use new macro
Make byte-swap routines use the common GEN_QEMU_LOAD macro

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 4f364fe76f target-ppc: convert ld64 to use new macro
Use the macro for ld64 as well; this changes the function signature from
gen_qemu_ld64 => gen_qemu_ld64_i64. Replace it at all the call sites.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Nikunj A Dadhania 09bfe50d57 target-ppc: consolidate load operations
Implement a macro to consolidate the load operations using the newer
tcg_gen_qemu_ld functions.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Rajalakshmi Srinivasaraghavan e7b1e06fbc target-ppc: add vector insert instructions
The following vector insert instructions are added from ISA 3.0.

vinsertb - Vector Insert Byte
vinserth - Vector Insert Halfword
vinsertw - Vector Insert Word
vinsertd - Vector Insert Doubleword

Signed-off-by: Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Benjamin Herrenschmidt 6ca038c292 ppc: restrict the use of the rfi instruction
Power ISA 2.x has removed the rfi instruction, and rfid should be used
instead on CPUs implementing this version of the instruction set or
later.

This will raise an invalid-instruction exception when rfi is used on
such processors, i.e. Book3S 64-bit processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: the required fix in openbios, commit b747b6acc272 ('ppc: use
      rfid when running under a CPU from the 970 family.'), is now
      merged in qemu under commit 5cebd885d0 ('Update OpenBIOS
      images to b747b6a built from submodule.') ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23 10:29:40 +10:00
Benjamin Herrenschmidt accc60c47c ppc: Don't generate dead code on unconditional branches
We are always generating the "else" case of the condition even when
generating an unconditional branch that will never hit it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:12 +10:00
Benjamin Herrenschmidt 15848410af ppc: Rename #include'd .c files to .inc.c
Also while at it, group the #include statements in translate.c

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:12 +10:00
Nikunj A Dadhania 787bbe3711 target-ppc: add extswsli[.] instruction
extswsli : Extend Sign Word & Shift Left Immediate

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:12 +10:00
Nikunj A Dadhania 4110b586de target-ppc: implement branch-less divd[o][.]
Similar to divw, implement branch-less divd.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Nikunj A Dadhania b07c32dc4b target-ppc: implement branch-less divw[o][.]
While implementing the modulo instructions, it turned out that the
existing implementation uses many branches. Change the logic to make the
code branch-less. In case of invalid input, the (architecturally
undefined) result is set to the dividend.
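
A plain-C sketch of the branch-less idea (illustrative only; the real
translation uses TCG operations such as movcond rather than C
conditionals, and the function name is made up):

    #include <stdint.h>

    /* divw with the invalid cases folded in: on divide-by-zero or
     * INT32_MIN / -1, the (undefined) result is set to the dividend. */
    static int32_t divw_sketch(int32_t ra, int32_t rb)
    {
        int invalid = (rb == 0) | ((ra == INT32_MIN) & (rb == -1));
        int32_t divisor = invalid ? 1 : rb;   /* substitute a safe divisor */
        int32_t quotient = ra / divisor;
        return invalid ? ra : quotient;       /* select dividend if invalid */
    }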

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Benjamin Herrenschmidt 5817355ed0 ppc: load/store multiple and string insns don't do LE
Just generate an alignment interrupt.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Benjamin Herrenschmidt 65f2475f1f ppc: Use a helper to generate "LE unsupported" alignment interrupts
Some operations aren't allowed in LE mode; use a helper rather than
open-coding the exception generation.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Benjamin Herrenschmidt 5f2a625452 ppc: Don't set access_type on all load/stores on hash64
We don't use it, so let's not generate the updates.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Benjamin Herrenschmidt fbc3b39b39 ppc: Fix CFAR updates
The CFAR update was one instruction off.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Benjamin Herrenschmidt c9f82d013b ppc: Speed up dcbz
Use tlb_vaddr_to_host to do a fast-path single translation for
the whole cache line. Also make the reservation check match
the entire range.
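
A hedged sketch of the fast-path shape (illustrative; the translate
callback merely stands in for tlb_vaddr_to_host, and the slow path is
omitted):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef void *(*translate_fn)(uint64_t vaddr, size_t len);

    static void dcbz_sketch(uint64_t vaddr, size_t line_size,
                            translate_fn xlate)
    {
        /* One translation attempt covering the whole cache line. */
        void *host = xlate(vaddr, line_size);
        if (host != NULL) {
            memset(host, 0, line_size);   /* fast path: zero it directly */
        } else {
            /* slow path: zero the line through the normal MMU accessors */
        }
    }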

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Benjamin Herrenschmidt 22b56ee568 ppc: Handle unconditional (always/never) traps at translation time
We don't need to call a helper for "trap always" and "trap never",
which are used by Linux under some circumstances.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
--

v2. Don't generate the helper call when trapping always
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:11 +10:00
Benjamin Herrenschmidt 3433b732a4 ppc: Make alignment exceptions suck less
The current alignment exception generation tries to load the opcode
to put in DSISR from a context where a cpu_ldl_code() is really not
a good idea. It might fault and longjmp out and that's not something
we want happening here.

Instead, pass the relevant opcode bits via the error_code.

There are a couple of cases of alignment interrupts that won't set
anything, namely the ones coming from accesses to direct-store segments,
but that doesn't happen in practice: nobody used direct-store segments,
and they are gone from newer chips.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:10 +10:00
Benjamin Herrenschmidt b00a3b3648 ppc: Don't update NIP in dcbz and lscbx
Instead, pass GETPC() result to the corresponding helpers.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:10 +10:00
Benjamin Herrenschmidt 573708e329 ppc: Don't update NIP if not taking alignment exceptions
Move the NIP update to after the conditional branch so that we
don't do it if we aren't going to take the alignment exception.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:10 +10:00
Benjamin Herrenschmidt 72073dcce0 ppc: Don't update NIP on conditional trap instructions
This is no longer necessary as the helpers will properly retrieve
the return address when needed.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:10 +10:00
Benjamin Herrenschmidt 8c8966e218 ppc: Don't update NIP in BookE 2.06 tlbwe
This is no longer necessary as the helpers will properly retrieve
the return address when needed.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:10 +10:00
Benjamin Herrenschmidt 57a2988b6f ppc: Don't update NIP in facility unavailable interrupts
This is no longer necessary as the helpers will properly retrieve
the return address when needed. Also remove gen_update_current_nip()
which didn't seem to make much sense to me.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:10 +10:00
Benjamin Herrenschmidt a13f0a9bc4 ppc: Don't update NIP in DCR access routines
This is no longer necessary as the helpers will properly retrieve
the return address when needed.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:10 +10:00
Benjamin Herrenschmidt bd6fefe71c ppc: Make tlb_fill() use new exception helper
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:40:03 +10:00
Benjamin Herrenschmidt af6d376ea1 ppc: Don't update NIP in lmw/stmw/icbi
Instead, pass GETPC() result to the corresponding helpers.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:38:00 +10:00
Benjamin Herrenschmidt e41029b378 ppc: Don't update NIP in lswi/lswx/stswi/stswx
Instead, pass the GETPC() result to the corresponding helpers. This
requires a bit of fiddling to get the PC (hopefully) right in
the case where we generate a program check, though the hacks there
are temporary; a subsequent patch will clean this all up by always
having the nip already set to the right instruction when taking
the fault.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Fix trivial checkpatch warning]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:37:48 +10:00
Benjamin Herrenschmidt 3014427af5 ppc: Move VSX ops out of translate.c
Makes things a bit more manageable

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:33:46 +10:00
Benjamin Herrenschmidt 0304af897b ppc: Move VMX ops out of translate.c
Makes things a bit more manageable

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:33:46 +10:00
Benjamin Herrenschmidt 8b25cdd371 ppc: Move DFP ops out of translate.c
Makes things a bit more manageable

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:33:46 +10:00
Benjamin Herrenschmidt 4083de6b53 ppc: Move embedded spe ops out of translate.c
Makes things a bit more manageable

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:33:46 +10:00
Benjamin Herrenschmidt f96511215d ppc: Move classic fp ops out of translate.c
Makes things a bit more manageable

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:33:46 +10:00
Nikunj A Dadhania 323ad19bcc target-ppc: introduce opc4 for Expanded Opcode
ISA 3.0 has introduced EO (Expanded Opcode). Introduce a third-level
indirect opcode table and the corresponding parsing routines.

EO (11:12) Expanded opcode field
Formats: XX1

EO (11:15) Expanded opcode field
Formats: VX, X, XX2
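
An illustrative sketch of extracting the EO field from a 32-bit
instruction word (IBM bit numbering, bit 0 = MSB; the helper names are
made up and this is not the actual parsing code):

    #include <stdint.h>

    /* EO at bits 11:15 (VX, X, XX2 forms): 5 bits, MSB at IBM bit 11. */
    static unsigned eo_11_15(uint32_t insn)
    {
        return (insn >> (31 - 15)) & 0x1f;
    }

    /* EO at bits 11:12 (XX1 form): 2 bits, MSB at IBM bit 11. */
    static unsigned eo_11_12(uint32_t insn)
    {
        return (insn >> (31 - 12)) & 0x3;
    }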

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
[dwg: Trivial checkpatch fixup]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 12:33:33 +10:00
Nikunj A Dadhania 5f29cc8292 target-ppc: add maddhd and maddhdu instruction
maddhd: Multiply-Add High Doubleword
maddhdu: Multiply-Add High Doubleword Unsigned

The above two instructions are a dual form and differ by a single bit
(bit 31).

Multiplies two 64-bit registers (RA * RB), adds the third register (RC)
to the result (a quadword) and returns the high doubleword in the target
register (RT).

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 09:52:14 +10:00
Nikunj A Dadhania aeeb044c7b target-ppc: add maddld instruction
maddld: Multiply-Add Low Doubleword

Multiplies two 64-bit registers (RA * RB), adds the third register (RC)
to the result (a quadword) and returns the low doubleword in the target
register (RT).
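
An illustrative sketch of the arithmetic for maddld and for the
maddhd/maddhdu pair above (shown for the signed forms), using a 128-bit
intermediate (plain C with the GCC/Clang __int128 extension; not the
TCG implementation):

    #include <stdint.h>

    /* (RA * RB) + RC as a signed 128-bit quadword. */
    static __int128 madd128(int64_t ra, int64_t rb, int64_t rc)
    {
        return (__int128)ra * rb + rc;
    }

    static int64_t maddhd_sketch(int64_t ra, int64_t rb, int64_t rc)
    {
        return (int64_t)(madd128(ra, rb, rc) >> 64);  /* high doubleword */
    }

    static int64_t maddld_sketch(int64_t ra, int64_t rb, int64_t rc)
    {
        return (int64_t)madd128(ra, rb, rc);          /* low doubleword */
    }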

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 09:52:14 +10:00
Vivek Andrew Sha dc2ee038da target-ppc: add setb instruction
The CR field number is provided in the opcode as BFA (bits 11:13).

Returns:
  -1 if bit 0 of the CR field is set
   1 if bit 1 of the CR field is set
   0 otherwise.
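
An illustrative sketch of the result selection over the 4-bit CR field
value (plain C, not the TCG code; the function name is made up):

    #include <stdint.h>

    /* crf holds the 4-bit CR field; bit 0 of the field is the MSB (0x8). */
    static int64_t setb_sketch(uint32_t crf)
    {
        if (crf & 0x8) {          /* bit 0 of the CR field set */
            return -1;
        } else if (crf & 0x4) {   /* bit 1 of the CR field set */
            return 1;
        }
        return 0;
    }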

Signed-off-by: Vivek Andrew Sha <vivekandrewsha@gmail.com>
[ reworded commit, used 32bit ops as crf is 32bits ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 09:52:14 +10:00
Nikunj A Dadhania 082ce33005 target-ppc: add cmpeqb instruction
Search for a byte in the stream of 8 bytes provided in the register.
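
An illustrative sketch of the byte search described above (plain C; how
the result lands in the target CR field is omitted, and the function
name is made up):

    #include <stdbool.h>
    #include <stdint.h>

    /* Does the low byte of ra appear in any of the eight bytes of rb? */
    static bool cmpeqb_match(uint64_t ra, uint64_t rb)
    {
        uint8_t needle = ra & 0xff;
        for (int i = 0; i < 8; i++) {
            if (((rb >> (i * 8)) & 0xff) == needle) {
                return true;
            }
        }
        return false;
    }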

Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 09:52:14 +10:00
Nikunj A Dadhania b35344e4a0 target-ppc: add cnttzw[.] instruction
Add the ISA 3.0 Count Trailing Zeros Word instruction.
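
An illustrative sketch of the semantics (plain C; the actual TCG
translation may use different primitives, and the function name is made
up):

    #include <stdint.h>

    /* Count trailing zeros of the low 32 bits; the result is 32 for 0. */
    static uint64_t cnttzw_sketch(uint64_t rs)
    {
        uint32_t word = (uint32_t)rs;
        uint64_t n = 0;
        while (n < 32 && ((word >> n) & 1) == 0) {
            n++;
        }
        return n;
    }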

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-07 09:52:14 +10:00