Upgrade to PCRE 8.37 due to various bugfixes

Stanislav Malyshev 2015-04-29 22:25:02 -07:00
parent 9c5c3ff022
commit 95fa727992
36 changed files with 47123 additions and 192 deletions

NEWS
View File

@ -3,7 +3,7 @@ PHP NEWS
?? ??? 2015 PHP 5.4.41
- PCRE
. Upgraded pcrelib to 8.36.
. Upgraded pcrelib to 8.37.
16 Apr 2015 PHP 5.4.40

ext/pcre/config.w32
View File

@ -3,7 +3,7 @@
EXTENSION("pcre", "php_pcre.c", false /* never shared */,
"-Iext/pcre/pcrelib");
ADD_SOURCES("ext/pcre/pcrelib", "pcre_chartables.c pcre_ucd.c pcre_compile.c pcre_config.c pcre_exec.c pcre_fullinfo.c pcre_get.c pcre_globals.c pcre_maketables.c pcre_newline.c pcre_ord2utf8.c pcre_refcount.c pcre_study.c pcre_tables.c pcre_valid_utf8.c pcre_version.c pcre_xclass.c", "pcre");
ADD_SOURCES("ext/pcre/pcrelib", "pcre_chartables.c pcre_ucd.c pcre_compile.c pcre_config.c pcre_exec.c pcre_fullinfo.c pcre_get.c pcre_globals.c pcre_maketables.c pcre_newline.c pcre_ord2utf8.c pcre_refcount.c pcre_study.c pcre_tables.c pcre_valid_utf8.c pcre_version.c pcre_xclass.c pcre_jit_compile.c", "pcre");
ADD_DEF_FILE("ext\\pcre\\php_pcre.def");
AC_DEFINE('HAVE_BUNDLED_PCRE', 1, 'Using bundled PCRE library');

ext/pcre/config0.m4
View File

@ -58,7 +58,8 @@ PHP_ARG_WITH(pcre-regex,,
pcrelib/pcre_maketables.c pcrelib/pcre_newline.c \
pcrelib/pcre_ord2utf8.c pcrelib/pcre_refcount.c pcrelib/pcre_study.c \
pcrelib/pcre_tables.c pcrelib/pcre_valid_utf8.c \
pcrelib/pcre_version.c pcrelib/pcre_xclass.c"
pcrelib/pcre_version.c pcrelib/pcre_xclass.c \
pcrelib/pcre_jit_compile.c"
PHP_PCRE_CFLAGS="-DHAVE_CONFIG_H -I@ext_srcdir@/pcrelib"
PHP_NEW_EXTENSION(pcre, $pcrelib_sources php_pcre.c, no,,$PHP_PCRE_CFLAGS)
PHP_ADD_BUILD_DIR($ext_builddir/pcrelib)

ext/pcre/pcrelib/AUTHORS
View File

@ -8,7 +8,7 @@ Email domain: cam.ac.uk
University of Cambridge Computing Service,
Cambridge, England.
Copyright (c) 1997-2014 University of Cambridge
Copyright (c) 1997-2015 University of Cambridge
All rights reserved.
@ -19,7 +19,7 @@ Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright(c) 2010-2014 Zoltan Herczeg
Copyright(c) 2010-2015 Zoltan Herczeg
All rights reserved.
@ -30,7 +30,7 @@ Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright(c) 2009-2014 Zoltan Herczeg
Copyright(c) 2009-2015 Zoltan Herczeg
All rights reserved.

ext/pcre/pcrelib/ChangeLog
View File

@ -1,6 +1,173 @@
ChangeLog for PCRE
------------------
Version 8.37 28-April-2015
--------------------------
1. When an (*ACCEPT) is triggered inside capturing parentheses, it arranges
for those parentheses to be closed with whatever has been captured so far.
However, it was failing to mark any other groups between the highest
capture so far and the current group as "unset". Thus, the ovector for
those groups contained whatever was previously there. An example is the
pattern /(x)|((*ACCEPT))/ when matched against "abcd". (A demonstration of
this through the public API follows this ChangeLog excerpt.)
2. If an assertion condition was quantified with a minimum of zero (an odd
thing to do, but it happened), SIGSEGV or other misbehaviour could occur.
3. If a pattern in pcretest input had the P (POSIX) modifier followed by an
unrecognized modifier, a crash could occur.
4. An attempt to do global matching in pcretest with a zero-length ovector
caused a crash.
5. Fixed a memory leak during matching that could occur for a subpattern
subroutine call (recursive or otherwise) if the number of captured groups
that had to be saved was greater than ten.
6. Catch a bad opcode during auto-possessification after compiling a bad UTF
string with NO_UTF_CHECK. This is a tidyup, not a bug fix, as passing bad
UTF with NO_UTF_CHECK is documented as having an undefined outcome.
7. A UTF pattern containing a "not" match of a non-ASCII character and a
subroutine reference could loop at compile time. Example: /[^\xff]((?1))/.
8. When a pattern is compiled, it remembers the highest back reference so that
when matching, if the ovector is too small, extra memory can be obtained to
use instead. A conditional subpattern whose condition is a check on a
capture having happened, such as, for example in the pattern
/^(?:(a)|b)(?(1)A|B)/, is another kind of back reference, but it was not
setting the highest backreference number. This mattered only if pcre_exec()
was called with an ovector that was too small to hold the capture, and there
was no other kind of back reference (a situation which is probably quite
rare). The effect of the bug was that the condition was always treated as
FALSE when the capture could not be consulted, leading to incorrect
behaviour by pcre_exec(). This bug has been fixed.
9. A reference to a duplicated named group (either a back reference or a test
for being set in a conditional) that occurred in a part of the pattern where
PCRE_DUPNAMES was not set caused the amount of memory needed for the pattern
to be incorrectly calculated, leading to overwriting.
10. A mutually recursive set of back references such as (\2)(\1) caused a
segfault at study time (while trying to find the minimum matching length).
The infinite loop is now broken (with the minimum length unset, that is,
zero).
11. If an assertion that was used as a condition was quantified with a minimum
of zero, matching went wrong. In particular, if the whole group had
unlimited repetition and could match an empty string, a segfault was
likely. The pattern (?(?=0)?)+ is an example that caused this. Perl allows
assertions to be quantified, but not if they are being used as conditions,
so the above pattern is faulted by Perl. PCRE has now been changed so that
it also rejects such patterns.
12. A possessive capturing group such as (a)*+ with a minimum repeat of zero
failed to allow the zero-repeat case if pcre_exec() was called with an
ovector too small to capture the group.
13. Fixed two bugs in pcretest that were discovered by fuzzing and reported by
Red Hat Product Security:
(a) A crash if /K and /F were both set with the option to save the compiled
pattern.
(b) Another crash if the option to print captured substrings in a callout
was combined with setting a null ovector, for example \O\C+ as a subject
string.
14. A pattern such as "((?2){0,1999}())?", which has a group containing a
forward reference repeated a large (but limited) number of times within a
repeated outer group that has a zero minimum quantifier, caused incorrect
code to be compiled, leading to the error "internal error:
previously-checked referenced subpattern not found" when an incorrect
memory address was read. This bug was reported as "heap overflow",
discovered by Kai Lu of Fortinet's FortiGuard Labs and given the CVE number
CVE-2015-2325.
23. A pattern such as "((?+1)(\1))/" containing a forward reference subroutine
call within a group that also contained a recursive back reference caused
incorrect code to be compiled. This bug was reported as "heap overflow",
discovered by Kai Lu of Fortinet's FortiGuard Labs, and given the CVE
number CVE-2015-2326.
24. Computing the size of the JIT read-only data in advance has been a source
of various issues, and new ones still appear, unfortunately. To fix
existing and future issues, size computation is eliminated from the code,
and replaced by on-demand memory allocation.
25. A pattern such as /(?i)[A-`]/, where characters in the other case are
adjacent to the end of the range, and the range contained characters with
more than one other case, caused incorrect behaviour when compiled in UTF
mode. In that example, the range a-j was left out of the class.
26. Fix JIT compilation of conditional blocks whose assertion
is converted to (*FAIL). E.g.: /(?(?!))/.
27. The pattern /(?(?!)^)/ caused references to random memory. This bug was
discovered by the LLVM fuzzer.
28. The assertion (?!) is optimized to (*FAIL). This was not handled correctly
when this assertion was used as a condition, for example (?(?!)a|b). In
pcre_exec() it worked by luck; in pcre_dfa_exec() it gave an incorrect
error about an unsupported item.
29. For some types of pattern, for example /Z*(|d*){216}/, the auto-
possessification code could take exponential time to complete. A recursion
depth limit of 1000 has been imposed to limit the resources used by this
optimization.
30. In a pattern such as /(*UTF)[\S\V\H]/, which contains a negated special class
such as \S in non-UCP mode, explicit wide characters (> 255) can be ignored
because \S ensures they are all in the class. The code for doing this was
interacting badly with the code for computing the amount of space needed to
compile the pattern, leading to a buffer overflow. This bug was discovered
by the LLVM fuzzer.
31. A pattern such as /((?2)+)((?1))/ which has mutual recursion nested inside
other kinds of group caused stack overflow at compile time. This bug was
discovered by the LLVM fuzzer.
32. A pattern such as /(?1)(?#?'){8}(a)/ which had a parenthesized comment
between a subroutine call and its quantifier was incorrectly compiled,
leading to buffer overflow or other errors. This bug was discovered by the
LLVM fuzzer.
33. The illegal pattern /(?(?<E>.*!.*)?)/ was not being diagnosed as missing an
assertion after (?(. The code was failing to check the character after
(?(?< for the ! or = that would indicate a lookbehind assertion. This bug
was discovered by the LLVM fuzzer.
34. A pattern such as /X((?2)()*+){2}+/ which has a possessive quantifier with
a fixed maximum following a group that contains a subroutine reference was
incorrectly compiled and could trigger buffer overflow. This bug was
discovered by the LLVM fuzzer.
35. A mutual recursion within a lookbehind assertion such as (?<=((?2))((?1)))
caused a stack overflow instead of the diagnosis of a non-fixed length
lookbehind assertion. This bug was discovered by the LLVM fuzzer.
36. The use of \K in a positive lookbehind assertion in a non-anchored pattern
(e.g. /(?<=\Ka)/) could make pcregrep loop.
37. There was a similar problem to 36 in pcretest for global matches.
38. If a greedy quantified \X was preceded by \C in UTF mode (e.g. \C\X*),
and a subsequent item in the pattern caused a non-match, backtracking over
the repeated \X did not stop, but carried on past the start of the subject,
causing reference to random memory and/or a segfault. There were also some
other cases where backtracking after \C could crash. This set of bugs was
discovered by the LLVM fuzzer.
39. The function for finding the minimum length of a matching string could take
a very long time if mutual recursion was present many times in a pattern,
for example, /((?2){73}(?2))((?1))/. A better mutual recursion detection
method has been implemented. This infelicity was discovered by the LLVM
fuzzer.
40. Static linking against the PCRE library using the pkg-config module was
failing on missing pthread symbols.
Version 8.36 26-September-2014
------------------------------
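
A minimal sketch (not part of this diff) that exercises item 1 above through
the public PCRE 1 API; with 8.37 the slots for group 1 come back as -1 (unset)
when (*ACCEPT) closes group 2 with an empty capture. Link with -lpcre:

#include <stdio.h>
#include <pcre.h>

int main(void)
{
const char *error;
int erroffset;
int ovector[9];   /* room for 3 capturing pairs */
int rc;
pcre *re = pcre_compile("(x)|((*ACCEPT))", 0, &error, &erroffset, NULL);
if (re == NULL) { printf("compile failed: %s\n", error); return 1; }
rc = pcre_exec(re, NULL, "abcd", 4, 0, 0, ovector, 9);
printf("rc=%d group1=(%d,%d) group2=(%d,%d)\n", rc,
  ovector[2], ovector[3], ovector[4], ovector[5]);
/* Expected with 8.37: group1=(-1,-1) because group 1 never matched;
   before the fix these slots could hold stale values. */
pcre_free(re);
return 0;
}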

ext/pcre/pcrelib/LICENCE
View File

@ -6,7 +6,8 @@ and semantics are as close as possible to those of the Perl 5 language.
Release 8 of PCRE is distributed under the terms of the "BSD" licence, as
specified below. The documentation for PCRE, supplied in the "doc"
directory, is distributed under the same terms as the software itself.
directory, is distributed under the same terms as the software itself. The data
in the testdata directory is not copyrighted and is in the public domain.
The basic library functions are written in C and are freestanding. Also
included in the distribution is a set of C++ wrapper functions, and a
@ -24,7 +25,7 @@ Email domain: cam.ac.uk
University of Cambridge Computing Service,
Cambridge, England.
Copyright (c) 1997-2014 University of Cambridge
Copyright (c) 1997-2015 University of Cambridge
All rights reserved.
@ -35,7 +36,7 @@ Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright(c) 2010-2014 Zoltan Herczeg
Copyright(c) 2010-2015 Zoltan Herczeg
All rights reserved.
@ -46,7 +47,7 @@ Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright(c) 2009-2014 Zoltan Herczeg
Copyright(c) 2009-2015 Zoltan Herczeg
All rights reserved.

ext/pcre/pcrelib/NEWS
View File

@ -1,6 +1,14 @@
News about PCRE releases
------------------------
Release 8.37 28-April-2015
--------------------------
This is a bug-fix release. Note that this library (now called PCRE1) is now being
maintained for bug fixes only. New projects are advised to use the new PCRE2
libraries.
Release 8.36 26-September-2014
------------------------------

ext/pcre/pcrelib/README
View File

@ -1,7 +1,16 @@
README file for PCRE (Perl-compatible regular expression library)
-----------------------------------------------------------------
The latest release of PCRE is always available in three alternative formats
NOTE: This set of files relates to PCRE releases that use the original API,
with library names libpcre, libpcre16, and libpcre32. January 2015 saw the
first release of a new API, known as PCRE2, with release numbers starting at
10.00 and library names libpcre2-8, libpcre2-16, and libpcre2-32. The old
libraries (now called PCRE1) are still being maintained for bug fixes, but
there will be no new development. New projects are advised to use the new PCRE2
libraries.
The latest release of PCRE1 is always available in three alternative formats
from:
ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-xxx.tar.gz
@ -990,4 +999,4 @@ pcre_xxx, one with the name pcre16_xxx, and a third with the name pcre32_xxx.
Philip Hazel
Email local part: ph10
Email domain: cam.ac.uk
Last updated: 24 October 2014
Last updated: 10 February 2015

ext/pcre/pcrelib/config.h
View File

@ -395,7 +395,7 @@ them both to 0; an emulation function will be used. */
#undef SUPPORT_GCOV
/* Define to any value to enable support for Just-In-Time compiling. */
#undef SUPPORT_JIT
#define SUPPORT_JIT
/* Define to any value to allow pcregrep to be linked with libbz2, so that it
is able to handle .bz2 files. */
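
With SUPPORT_JIT now defined for the bundled Windows build (and
pcre_jit_compile.c added to config.w32 and config0.m4 above), JIT support can
be confirmed at run time. A minimal sketch, assuming a build against the
bundled headers:

#include <stdio.h>
#include <pcre.h>

int main(void)
{
int have_jit = 0;
const char *target = NULL;
pcre_config(PCRE_CONFIG_JIT, &have_jit);                 /* 1 if SUPPORT_JIT */
if (have_jit) pcre_config(PCRE_CONFIG_JITTARGET, &target);
printf("JIT: %s (%s)\n", have_jit ? "yes" : "no", target ? target : "n/a");
return 0;
}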

ext/pcre/pcrelib/pcre_compile.c
View File

@ -1704,6 +1704,7 @@ Arguments:
utf TRUE in UTF-8 / UTF-16 / UTF-32 mode
atend TRUE if called when the pattern is complete
cd the "compile data" structure
recurses chain of recurse_check to catch mutual recursion
Returns: the fixed length,
or -1 if there is no fixed length,
@ -1713,10 +1714,11 @@ Returns: the fixed length,
*/
static int
find_fixedlength(pcre_uchar *code, BOOL utf, BOOL atend, compile_data *cd)
find_fixedlength(pcre_uchar *code, BOOL utf, BOOL atend, compile_data *cd,
recurse_check *recurses)
{
int length = -1;
recurse_check this_recurse;
register int branchlength = 0;
register pcre_uchar *cc = code + 1 + LINK_SIZE;
@ -1741,7 +1743,8 @@ for (;;)
case OP_ONCE:
case OP_ONCE_NC:
case OP_COND:
d = find_fixedlength(cc + ((op == OP_CBRA)? IMM2_SIZE : 0), utf, atend, cd);
d = find_fixedlength(cc + ((op == OP_CBRA)? IMM2_SIZE : 0), utf, atend, cd,
recurses);
if (d < 0) return d;
branchlength += d;
do cc += GET(cc, 1); while (*cc == OP_ALT);
@ -1775,7 +1778,15 @@ for (;;)
cs = ce = (pcre_uchar *)cd->start_code + GET(cc, 1); /* Start subpattern */
do ce += GET(ce, 1); while (*ce == OP_ALT); /* End subpattern */
if (cc > cs && cc < ce) return -1; /* Recursion */
d = find_fixedlength(cs + IMM2_SIZE, utf, atend, cd);
else /* Check for mutual recursion */
{
recurse_check *r = recurses;
for (r = recurses; r != NULL; r = r->prev) if (r->group == cs) break;
if (r != NULL) return -1; /* Mutual recursion */
}
this_recurse.prev = recurses;
this_recurse.group = cs;
d = find_fixedlength(cs + IMM2_SIZE, utf, atend, cd, &this_recurse);
if (d < 0) return d;
branchlength += d;
cc += 1 + LINK_SIZE;
@ -2129,32 +2140,60 @@ for (;;)
{
case OP_CHAR:
case OP_CHARI:
case OP_NOT:
case OP_NOTI:
case OP_EXACT:
case OP_EXACTI:
case OP_NOTEXACT:
case OP_NOTEXACTI:
case OP_UPTO:
case OP_UPTOI:
case OP_NOTUPTO:
case OP_NOTUPTOI:
case OP_MINUPTO:
case OP_MINUPTOI:
case OP_NOTMINUPTO:
case OP_NOTMINUPTOI:
case OP_POSUPTO:
case OP_POSUPTOI:
case OP_NOTPOSUPTO:
case OP_NOTPOSUPTOI:
case OP_STAR:
case OP_STARI:
case OP_NOTSTAR:
case OP_NOTSTARI:
case OP_MINSTAR:
case OP_MINSTARI:
case OP_NOTMINSTAR:
case OP_NOTMINSTARI:
case OP_POSSTAR:
case OP_POSSTARI:
case OP_NOTPOSSTAR:
case OP_NOTPOSSTARI:
case OP_PLUS:
case OP_PLUSI:
case OP_NOTPLUS:
case OP_NOTPLUSI:
case OP_MINPLUS:
case OP_MINPLUSI:
case OP_NOTMINPLUS:
case OP_NOTMINPLUSI:
case OP_POSPLUS:
case OP_POSPLUSI:
case OP_NOTPOSPLUS:
case OP_NOTPOSPLUSI:
case OP_QUERY:
case OP_QUERYI:
case OP_NOTQUERY:
case OP_NOTQUERYI:
case OP_MINQUERY:
case OP_MINQUERYI:
case OP_NOTMINQUERY:
case OP_NOTMINQUERYI:
case OP_POSQUERY:
case OP_POSQUERYI:
case OP_NOTPOSQUERY:
case OP_NOTPOSQUERYI:
if (HAS_EXTRALEN(code[-1])) code += GET_EXTRALEN(code[-1]);
break;
}
@ -2334,11 +2373,6 @@ Arguments:
Returns: TRUE if what is matched could be empty
*/
typedef struct recurse_check {
struct recurse_check *prev;
const pcre_uchar *group;
} recurse_check;
static BOOL
could_be_empty_branch(const pcre_uchar *code, const pcre_uchar *endcode,
BOOL utf, compile_data *cd, recurse_check *recurses)
@ -2469,8 +2503,8 @@ for (code = first_significant_code(code + PRIV(OP_lengths)[*code], TRUE);
empty_branch = FALSE;
do
{
if (!empty_branch && could_be_empty_branch(code, endcode, utf, cd, NULL))
empty_branch = TRUE;
if (!empty_branch && could_be_empty_branch(code, endcode, utf, cd,
recurses)) empty_branch = TRUE;
code += GET(code, 1);
}
while (*code == OP_ALT);
@ -3065,7 +3099,7 @@ Returns: TRUE if the auto-possessification is possible
static BOOL
compare_opcodes(const pcre_uchar *code, BOOL utf, const compile_data *cd,
const pcre_uint32 *base_list, const pcre_uchar *base_end)
const pcre_uint32 *base_list, const pcre_uchar *base_end, int *rec_limit)
{
pcre_uchar c;
pcre_uint32 list[8];
@ -3082,6 +3116,9 @@ pcre_uint32 chr;
BOOL accepted, invert_bits;
BOOL entered_a_group = FALSE;
if (*rec_limit == 0) return FALSE;
--(*rec_limit);
/* Note: the base_list[1] contains whether the current opcode has greedy
(represented by a non-zero value) quantifier. This differs from
other character type lists, which stores here that the character iterator
@ -3152,7 +3189,8 @@ for(;;)
while (*next_code == OP_ALT)
{
if (!compare_opcodes(code, utf, cd, base_list, base_end)) return FALSE;
if (!compare_opcodes(code, utf, cd, base_list, base_end, rec_limit))
return FALSE;
code = next_code + 1 + LINK_SIZE;
next_code += GET(next_code, 1);
}
@ -3172,7 +3210,7 @@ for(;;)
/* The bracket content will be checked by the
OP_BRA/OP_CBRA case above. */
next_code += 1 + LINK_SIZE;
if (!compare_opcodes(next_code, utf, cd, base_list, base_end))
if (!compare_opcodes(next_code, utf, cd, base_list, base_end, rec_limit))
return FALSE;
code += PRIV(OP_lengths)[c];
@ -3605,11 +3643,20 @@ register pcre_uchar c;
const pcre_uchar *end;
pcre_uchar *repeat_opcode;
pcre_uint32 list[8];
int rec_limit;
for (;;)
{
c = *code;
/* When a pattern with bad UTF-8 encoding is compiled with NO_UTF_CHECK,
it may compile without complaining, but may get into a loop here if the code
pointer points to a bad value. This is, of course, a documented possibility
when NO_UTF_CHECK is set, so it isn't a bug, but we can detect this case and
just give up on this optimization. */
if (c >= OP_TABLE_LENGTH) return;
if (c >= OP_STAR && c <= OP_TYPEPOSUPTO)
{
c -= get_repeat_base(c) - OP_STAR;
@ -3617,7 +3664,8 @@ for (;;)
get_chr_property_list(code, utf, cd->fcc, list) : NULL;
list[1] = c == OP_STAR || c == OP_PLUS || c == OP_QUERY || c == OP_UPTO;
if (end != NULL && compare_opcodes(end, utf, cd, list, end))
rec_limit = 1000;
if (end != NULL && compare_opcodes(end, utf, cd, list, end, &rec_limit))
{
switch(c)
{
@ -3673,7 +3721,8 @@ for (;;)
list[1] = (c & 1) == 0;
if (compare_opcodes(end, utf, cd, list, end))
rec_limit = 1000;
if (compare_opcodes(end, utf, cd, list, end, &rec_limit))
{
switch (c)
{
@ -3947,14 +3996,14 @@ Arguments:
adjust the amount by which the group is to be moved
utf TRUE in UTF-8 / UTF-16 / UTF-32 mode
cd contains pointers to tables etc.
save_hwm the hwm forward reference pointer at the start of the group
save_hwm_offset the hwm forward reference offset at the start of the group
Returns: nothing
*/
static void
adjust_recurse(pcre_uchar *group, int adjust, BOOL utf, compile_data *cd,
pcre_uchar *save_hwm)
size_t save_hwm_offset)
{
pcre_uchar *ptr = group;
@ -3966,7 +4015,8 @@ while ((ptr = (pcre_uchar *)find_recurse(ptr, utf)) != NULL)
/* See if this recursion is on the forward reference list. If so, adjust the
reference. */
for (hc = save_hwm; hc < cd->hwm; hc += LINK_SIZE)
for (hc = (pcre_uchar *)cd->start_workspace + save_hwm_offset; hc < cd->hwm;
hc += LINK_SIZE)
{
offset = (int)GET(hc, 0);
if (cd->start_code + offset == ptr + 1)
@ -4171,7 +4221,11 @@ if ((options & PCRE_CASELESS) != 0)
range. Otherwise, use a recursive call to add the additional range. */
else if (oc < start && od >= start - 1) start = oc; /* Extend downwards */
else if (od > end && oc <= end + 1) end = od; /* Extend upwards */
else if (od > end && oc <= end + 1)
{
end = od; /* Extend upwards */
if (end > classbits_end) classbits_end = (end <= 0xff ? end : 0xff);
}
else n8 += add_to_class(classbits, uchardptr, options, cd, oc, od);
}
}
@ -4411,7 +4465,7 @@ const pcre_uchar *tempptr;
const pcre_uchar *nestptr = NULL;
pcre_uchar *previous = NULL;
pcre_uchar *previous_callout = NULL;
pcre_uchar *save_hwm = NULL;
size_t save_hwm_offset = 0;
pcre_uint8 classbits[32];
/* We can fish out the UTF-8 setting once and for all into a BOOL, but we
@ -5470,6 +5524,12 @@ for (;; ptr++)
PUT(previous, 1, (int)(code - previous));
break; /* End of class handling */
}
/* Even though any XCLASS list is now discarded, we must allow for
its memory. */
if (lengthptr != NULL)
*lengthptr += (int)(class_uchardata - class_uchardata_base);
#endif
/* If there are no characters > 255, or they are all to be included or
@ -5870,6 +5930,7 @@ for (;; ptr++)
{
register int i;
int len = (int)(code - previous);
size_t base_hwm_offset = save_hwm_offset;
pcre_uchar *bralink = NULL;
pcre_uchar *brazeroptr = NULL;
@ -5924,7 +5985,7 @@ for (;; ptr++)
if (repeat_max <= 1) /* Covers 0, 1, and unlimited */
{
*code = OP_END;
adjust_recurse(previous, 1, utf, cd, save_hwm);
adjust_recurse(previous, 1, utf, cd, save_hwm_offset);
memmove(previous + 1, previous, IN_UCHARS(len));
code++;
if (repeat_max == 0)
@ -5948,7 +6009,7 @@ for (;; ptr++)
{
int offset;
*code = OP_END;
adjust_recurse(previous, 2 + LINK_SIZE, utf, cd, save_hwm);
adjust_recurse(previous, 2 + LINK_SIZE, utf, cd, save_hwm_offset);
memmove(previous + 2 + LINK_SIZE, previous, IN_UCHARS(len));
code += 2 + LINK_SIZE;
*previous++ = OP_BRAZERO + repeat_type;
@ -6011,26 +6072,25 @@ for (;; ptr++)
for (i = 1; i < repeat_min; i++)
{
pcre_uchar *hc;
pcre_uchar *this_hwm = cd->hwm;
size_t this_hwm_offset = cd->hwm - cd->start_workspace;
memcpy(code, previous, IN_UCHARS(len));
while (cd->hwm > cd->start_workspace + cd->workspace_size -
WORK_SIZE_SAFETY_MARGIN - (this_hwm - save_hwm))
WORK_SIZE_SAFETY_MARGIN -
(this_hwm_offset - base_hwm_offset))
{
size_t save_offset = save_hwm - cd->start_workspace;
size_t this_offset = this_hwm - cd->start_workspace;
*errorcodeptr = expand_workspace(cd);
if (*errorcodeptr != 0) goto FAILED;
save_hwm = (pcre_uchar *)cd->start_workspace + save_offset;
this_hwm = (pcre_uchar *)cd->start_workspace + this_offset;
}
for (hc = save_hwm; hc < this_hwm; hc += LINK_SIZE)
for (hc = (pcre_uchar *)cd->start_workspace + base_hwm_offset;
hc < (pcre_uchar *)cd->start_workspace + this_hwm_offset;
hc += LINK_SIZE)
{
PUT(cd->hwm, 0, GET(hc, 0) + len);
cd->hwm += LINK_SIZE;
}
save_hwm = this_hwm;
base_hwm_offset = this_hwm_offset;
code += len;
}
}
@ -6075,7 +6135,7 @@ for (;; ptr++)
else for (i = repeat_max - 1; i >= 0; i--)
{
pcre_uchar *hc;
pcre_uchar *this_hwm = cd->hwm;
size_t this_hwm_offset = cd->hwm - cd->start_workspace;
*code++ = OP_BRAZERO + repeat_type;
@ -6097,22 +6157,21 @@ for (;; ptr++)
copying them. */
while (cd->hwm > cd->start_workspace + cd->workspace_size -
WORK_SIZE_SAFETY_MARGIN - (this_hwm - save_hwm))
WORK_SIZE_SAFETY_MARGIN -
(this_hwm_offset - base_hwm_offset))
{
size_t save_offset = save_hwm - cd->start_workspace;
size_t this_offset = this_hwm - cd->start_workspace;
*errorcodeptr = expand_workspace(cd);
if (*errorcodeptr != 0) goto FAILED;
save_hwm = (pcre_uchar *)cd->start_workspace + save_offset;
this_hwm = (pcre_uchar *)cd->start_workspace + this_offset;
}
for (hc = save_hwm; hc < this_hwm; hc += LINK_SIZE)
for (hc = (pcre_uchar *)cd->start_workspace + base_hwm_offset;
hc < (pcre_uchar *)cd->start_workspace + this_hwm_offset;
hc += LINK_SIZE)
{
PUT(cd->hwm, 0, GET(hc, 0) + len + ((i != 0)? 2+LINK_SIZE : 1));
cd->hwm += LINK_SIZE;
}
save_hwm = this_hwm;
base_hwm_offset = this_hwm_offset;
code += len;
}
@ -6208,7 +6267,7 @@ for (;; ptr++)
{
int nlen = (int)(code - bracode);
*code = OP_END;
adjust_recurse(bracode, 1 + LINK_SIZE, utf, cd, save_hwm);
adjust_recurse(bracode, 1 + LINK_SIZE, utf, cd, save_hwm_offset);
memmove(bracode + 1 + LINK_SIZE, bracode, IN_UCHARS(nlen));
code += 1 + LINK_SIZE;
nlen += 1 + LINK_SIZE;
@ -6342,7 +6401,7 @@ for (;; ptr++)
else
{
*code = OP_END;
adjust_recurse(tempcode, 1 + LINK_SIZE, utf, cd, save_hwm);
adjust_recurse(tempcode, 1 + LINK_SIZE, utf, cd, save_hwm_offset);
memmove(tempcode + 1 + LINK_SIZE, tempcode, IN_UCHARS(len));
code += 1 + LINK_SIZE;
len += 1 + LINK_SIZE;
@ -6391,7 +6450,7 @@ for (;; ptr++)
default:
*code = OP_END;
adjust_recurse(tempcode, 1 + LINK_SIZE, utf, cd, save_hwm);
adjust_recurse(tempcode, 1 + LINK_SIZE, utf, cd, save_hwm_offset);
memmove(tempcode + 1 + LINK_SIZE, tempcode, IN_UCHARS(len));
code += 1 + LINK_SIZE;
len += 1 + LINK_SIZE;
@ -6420,15 +6479,25 @@ for (;; ptr++)
parenthesis forms. */
case CHAR_LEFT_PARENTHESIS:
newoptions = options;
skipbytes = 0;
bravalue = OP_CBRA;
save_hwm = cd->hwm;
reset_bracount = FALSE;
/* First deal with various "verbs" that can be introduced by '*'. */
ptr++;
/* First deal with comments. Putting this code right at the start ensures
that comments have no bad side effects. */
if (ptr[0] == CHAR_QUESTION_MARK && ptr[1] == CHAR_NUMBER_SIGN)
{
ptr += 2;
while (*ptr != CHAR_NULL && *ptr != CHAR_RIGHT_PARENTHESIS) ptr++;
if (*ptr == CHAR_NULL)
{
*errorcodeptr = ERR18;
goto FAILED;
}
continue;
}
/* Now deal with various "verbs" that can be introduced by '*'. */
if (ptr[0] == CHAR_ASTERISK && (ptr[1] == ':'
|| (MAX_255(ptr[1]) && ((cd->ctypes[ptr[1]] & ctype_letter) != 0))))
{
@ -6549,10 +6618,18 @@ for (;; ptr++)
goto FAILED;
}
/* Initialize for "real" parentheses */
newoptions = options;
skipbytes = 0;
bravalue = OP_CBRA;
save_hwm_offset = cd->hwm - cd->start_workspace;
reset_bracount = FALSE;
/* Deal with the extended parentheses; all are introduced by '?', and the
appearance of any of them means that this is not a capturing group. */
else if (*ptr == CHAR_QUESTION_MARK)
if (*ptr == CHAR_QUESTION_MARK)
{
int i, set, unset, namelen;
int *optset;
@ -6561,17 +6638,6 @@ for (;; ptr++)
switch (*(++ptr))
{
case CHAR_NUMBER_SIGN: /* Comment; skip to ket */
ptr++;
while (*ptr != CHAR_NULL && *ptr != CHAR_RIGHT_PARENTHESIS) ptr++;
if (*ptr == CHAR_NULL)
{
*errorcodeptr = ERR18;
goto FAILED;
}
continue;
/* ------------------------------------------------------------ */
case CHAR_VERTICAL_LINE: /* Reset capture count for each branch */
reset_bracount = TRUE;
@ -6620,8 +6686,13 @@ for (;; ptr++)
if (tempptr[1] == CHAR_QUESTION_MARK &&
(tempptr[2] == CHAR_EQUALS_SIGN ||
tempptr[2] == CHAR_EXCLAMATION_MARK ||
tempptr[2] == CHAR_LESS_THAN_SIGN))
(tempptr[2] == CHAR_LESS_THAN_SIGN &&
(tempptr[3] == CHAR_EQUALS_SIGN ||
tempptr[3] == CHAR_EXCLAMATION_MARK))))
{
cd->iscondassert = TRUE;
break;
}
/* Other conditions use OP_CREF/OP_DNCREF/OP_RREF/OP_DNRREF, and all
need to skip at least 1+IMM2_SIZE bytes at the start of the group. */
@ -6698,8 +6769,7 @@ for (;; ptr++)
ptr++;
}
namelen = (int)(ptr - name);
if (lengthptr != NULL && (options & PCRE_DUPNAMES) != 0)
*lengthptr += IMM2_SIZE;
if (lengthptr != NULL) *lengthptr += IMM2_SIZE;
}
/* Check the terminator */
@ -6735,6 +6805,7 @@ for (;; ptr++)
goto FAILED;
}
PUT2(code, 2+LINK_SIZE, recno);
if (recno > cd->top_backref) cd->top_backref = recno;
break;
}
@ -6757,6 +6828,7 @@ for (;; ptr++)
int offset = i++;
int count = 1;
recno = GET2(slot, 0); /* Number from first found */
if (recno > cd->top_backref) cd->top_backref = recno;
for (; i < cd->names_found; i++)
{
slot += cd->name_entry_size;
@ -7114,11 +7186,11 @@ for (;; ptr++)
if (!is_recurse) cd->namedrefcount++;
/* If duplicate names are permitted, we have to allow for a named
reference to a duplicated name (this cannot be determined until the
second pass). This needs an extra 16-bit data item. */
/* We have to allow for a named reference to a duplicated name (this
cannot be determined until the second pass). This needs an extra
16-bit data item. */
if ((options & PCRE_DUPNAMES) != 0) *lengthptr += IMM2_SIZE;
*lengthptr += IMM2_SIZE;
}
/* In the real compile, search the name table. We check the name
@ -7475,12 +7547,22 @@ for (;; ptr++)
goto FAILED;
}
/* Assertions used not to be repeatable, but this was changed for Perl
compatibility, so all kinds can now be repeated. We copy code into a
/* All assertions used not to be repeatable, but this was changed for Perl
compatibility. All kinds can now be repeated except for assertions that are
conditions (Perl also forbids these to be repeated). We copy code into a
non-register variable (tempcode) in order to be able to pass its address
because some compilers complain otherwise. */
because some compilers complain otherwise. At the start of a conditional
group whose condition is an assertion, cd->iscondassert is set. We unset it
here so as to allow assertions later in the group to be quantified. */
if (bravalue >= OP_ASSERT && bravalue <= OP_ASSERTBACK_NOT &&
cd->iscondassert)
{
previous = NULL;
cd->iscondassert = FALSE;
}
else previous = code;
previous = code; /* For handling repetition */
*code = bravalue;
tempcode = code;
tempreqvary = cd->req_varyopt; /* Save value before bracket */
@ -7727,7 +7809,7 @@ for (;; ptr++)
const pcre_uchar *p;
pcre_uint32 cf;
save_hwm = cd->hwm; /* Normally this is set when '(' is read */
save_hwm_offset = cd->hwm - cd->start_workspace; /* Normally this is set when '(' is read */
terminator = (*(++ptr) == CHAR_LESS_THAN_SIGN)?
CHAR_GREATER_THAN_SIGN : CHAR_APOSTROPHE;
@ -8054,6 +8136,7 @@ int length;
unsigned int orig_bracount;
unsigned int max_bracount;
branch_chain bc;
size_t save_hwm_offset;
/* If set, call the external function that checks for stack availability. */
@ -8071,6 +8154,8 @@ bc.current_branch = code;
firstchar = reqchar = 0;
firstcharflags = reqcharflags = REQ_UNSET;
save_hwm_offset = cd->hwm - cd->start_workspace;
/* Accumulate the length for use in the pre-compile phase. Start with the
length of the BRA and KET and any extra bytes that are required at the
beginning. We accumulate in a local variable to save frequent testing of
@ -8212,7 +8297,7 @@ for (;;)
int fixed_length;
*code = OP_END;
fixed_length = find_fixedlength(last_branch, (options & PCRE_UTF8) != 0,
FALSE, cd);
FALSE, cd, NULL);
DPRINTF(("fixed length = %d\n", fixed_length));
if (fixed_length == -3)
{
@ -8273,7 +8358,7 @@ for (;;)
{
*code = OP_END;
adjust_recurse(start_bracket, 1 + LINK_SIZE,
(options & PCRE_UTF8) != 0, cd, cd->hwm);
(options & PCRE_UTF8) != 0, cd, save_hwm_offset);
memmove(start_bracket + 1 + LINK_SIZE, start_bracket,
IN_UCHARS(code - start_bracket));
*start_bracket = OP_ONCE;
@ -8497,6 +8582,7 @@ do {
case OP_RREF:
case OP_DNRREF:
case OP_DEF:
case OP_FAIL:
return FALSE;
default: /* Assertion */
@ -9081,6 +9167,7 @@ cd->dupnames = FALSE;
cd->namedrefcount = 0;
cd->start_code = cworkspace;
cd->hwm = cworkspace;
cd->iscondassert = FALSE;
cd->start_workspace = cworkspace;
cd->workspace_size = COMPILE_WORK_SIZE;
cd->named_groups = named_groups;
@ -9118,13 +9205,6 @@ if (length > MAX_PATTERN_SIZE)
goto PCRE_EARLY_ERROR_RETURN;
}
/* If there are groups with duplicate names and there are also references by
name, we must allow for the possibility of named references to duplicated
groups. These require an extra data item each. */
if (cd->dupnames && cd->namedrefcount > 0)
length += cd->namedrefcount * IMM2_SIZE * sizeof(pcre_uchar);
/* Compute the size of the data block for storing the compiled pattern. Integer
overflow should no longer be possible because nowadays we limit the maximum
value of cd->names_found and cd->name_entry_size. */
@ -9183,6 +9263,7 @@ cd->name_table = (pcre_uchar *)re + re->name_table_offset;
codestart = cd->name_table + re->name_entry_size * re->name_count;
cd->start_code = codestart;
cd->hwm = (pcre_uchar *)(cd->start_workspace);
cd->iscondassert = FALSE;
cd->req_varyopt = 0;
cd->had_accept = FALSE;
cd->had_pruneorskip = FALSE;
@ -9319,7 +9400,7 @@ if (cd->check_lookbehind)
int end_op = *be;
*be = OP_END;
fixed_length = find_fixedlength(cc, (re->options & PCRE_UTF8) != 0, TRUE,
cd);
cd, NULL);
*be = end_op;
DPRINTF(("fixed length = %d\n", fixed_length));
if (fixed_length < 0)
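
A recurring theme in the pcre_compile.c hunks above is replacing saved
workspace pointers (save_hwm, this_hwm) with offsets from cd->start_workspace:
expand_workspace() can reallocate and therefore move the block, leaving any
saved raw pointer dangling. A generic, self-contained sketch of that hazard
(hypothetical names, not PCRE code):

#include <stdio.h>
#include <stdlib.h>

typedef struct { char *base; size_t used, cap; } workspace;

static int ws_put(workspace *ws, char c)
{
if (ws->used == ws->cap)
  {
  char *p = realloc(ws->base, ws->cap * 2);  /* may move the whole block */
  if (p == NULL) return -1;
  ws->base = p;              /* pointers saved before this line now dangle */
  ws->cap *= 2;
  }
ws->base[ws->used++] = c;
return 0;
}

int main(void)
{
workspace ws;
size_t mark;            /* an offset survives growth, like save_hwm_offset */
int i;
ws.base = malloc(4); ws.used = 0; ws.cap = 4;
mark = ws.used;
for (i = 0; i < 100; i++) ws_put(&ws, 'x');
printf("byte at saved offset: %c\n", ws.base[mark]);
free(ws.base);
return 0;
}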

ext/pcre/pcrelib/pcre_exec.c
View File

@ -1136,93 +1136,81 @@ for (;;)
printf("\n");
#endif
if (offset < md->offset_max)
if (offset >= md->offset_max) goto POSSESSIVE_NON_CAPTURE;
matched_once = FALSE;
code_offset = (int)(ecode - md->start_code);
save_offset1 = md->offset_vector[offset];
save_offset2 = md->offset_vector[offset+1];
save_offset3 = md->offset_vector[md->offset_end - number];
save_capture_last = md->capture_last;
DPRINTF(("saving %d %d %d\n", save_offset1, save_offset2, save_offset3));
/* Each time round the loop, save the current subject position for use
when the group matches. For MATCH_MATCH, the group has matched, so we
restart it with a new subject starting position, remembering that we had
at least one match. For MATCH_NOMATCH, carry on with the alternatives, as
usual. If we haven't matched any alternatives in any iteration, check to
see if a previous iteration matched. If so, the group has matched;
continue from afterwards. Otherwise it has failed; restore the previous
capture values before returning NOMATCH. */
for (;;)
{
matched_once = FALSE;
code_offset = (int)(ecode - md->start_code);
save_offset1 = md->offset_vector[offset];
save_offset2 = md->offset_vector[offset+1];
save_offset3 = md->offset_vector[md->offset_end - number];
save_capture_last = md->capture_last;
DPRINTF(("saving %d %d %d\n", save_offset1, save_offset2, save_offset3));
/* Each time round the loop, save the current subject position for use
when the group matches. For MATCH_MATCH, the group has matched, so we
restart it with a new subject starting position, remembering that we had
at least one match. For MATCH_NOMATCH, carry on with the alternatives, as
usual. If we haven't matched any alternatives in any iteration, check to
see if a previous iteration matched. If so, the group has matched;
continue from afterwards. Otherwise it has failed; restore the previous
capture values before returning NOMATCH. */
for (;;)
md->offset_vector[md->offset_end - number] =
(int)(eptr - md->start_subject);
if (op >= OP_SBRA) md->match_function_type = MATCH_CBEGROUP;
RMATCH(eptr, ecode + PRIV(OP_lengths)[*ecode], offset_top, md,
eptrb, RM63);
if (rrc == MATCH_KETRPOS)
{
md->offset_vector[md->offset_end - number] =
(int)(eptr - md->start_subject);
if (op >= OP_SBRA) md->match_function_type = MATCH_CBEGROUP;
RMATCH(eptr, ecode + PRIV(OP_lengths)[*ecode], offset_top, md,
eptrb, RM63);
if (rrc == MATCH_KETRPOS)
offset_top = md->end_offset_top;
ecode = md->start_code + code_offset;
save_capture_last = md->capture_last;
matched_once = TRUE;
mstart = md->start_match_ptr; /* In case \K changed it */
if (eptr == md->end_match_ptr) /* Matched an empty string */
{
offset_top = md->end_offset_top;
ecode = md->start_code + code_offset;
save_capture_last = md->capture_last;
matched_once = TRUE;
mstart = md->start_match_ptr; /* In case \K changed it */
if (eptr == md->end_match_ptr) /* Matched an empty string */
{
do ecode += GET(ecode, 1); while (*ecode == OP_ALT);
break;
}
eptr = md->end_match_ptr;
continue;
do ecode += GET(ecode, 1); while (*ecode == OP_ALT);
break;
}
/* See comment in the code for capturing groups above about handling
THEN. */
if (rrc == MATCH_THEN)
{
next = ecode + GET(ecode,1);
if (md->start_match_ptr < next &&
(*ecode == OP_ALT || *next == OP_ALT))
rrc = MATCH_NOMATCH;
}
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
md->capture_last = save_capture_last;
ecode += GET(ecode, 1);
if (*ecode != OP_ALT) break;
eptr = md->end_match_ptr;
continue;
}
if (!matched_once)
/* See comment in the code for capturing groups above about handling
THEN. */
if (rrc == MATCH_THEN)
{
md->offset_vector[offset] = save_offset1;
md->offset_vector[offset+1] = save_offset2;
md->offset_vector[md->offset_end - number] = save_offset3;
next = ecode + GET(ecode,1);
if (md->start_match_ptr < next &&
(*ecode == OP_ALT || *next == OP_ALT))
rrc = MATCH_NOMATCH;
}
if (allow_zero || matched_once)
{
ecode += 1 + LINK_SIZE;
break;
}
RRETURN(MATCH_NOMATCH);
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
md->capture_last = save_capture_last;
ecode += GET(ecode, 1);
if (*ecode != OP_ALT) break;
}
/* FALL THROUGH ... Insufficient room for saving captured contents. Treat
as a non-capturing bracket. */
if (!matched_once)
{
md->offset_vector[offset] = save_offset1;
md->offset_vector[offset+1] = save_offset2;
md->offset_vector[md->offset_end - number] = save_offset3;
}
/* VVVVVVVVVVVVVVVVVVVVVVVVV */
/* VVVVVVVVVVVVVVVVVVVVVVVVV */
if (allow_zero || matched_once)
{
ecode += 1 + LINK_SIZE;
break;
}
DPRINTF(("insufficient capture room: treat as non-capturing\n"));
/* VVVVVVVVVVVVVVVVVVVVVVVVV */
/* VVVVVVVVVVVVVVVVVVVVVVVVV */
RRETURN(MATCH_NOMATCH);
/* Non-capturing possessive bracket with unlimited repeat. We come here
from BRAZERO with allow_zero = TRUE. The code is similar to the above,
@ -1388,6 +1376,7 @@ for (;;)
break;
case OP_DEF: /* DEFINE - always false */
case OP_FAIL: /* From optimized (?!) condition */
break;
/* The condition is an assertion. Call match() to evaluate it - setting
@ -1404,8 +1393,11 @@ for (;;)
condition = TRUE;
/* Advance ecode past the assertion to the start of the first branch,
but adjust it so that the general choosing code below works. */
but adjust it so that the general choosing code below works. If the
assertion has a quantifier that allows zero repeats we must skip over
the BRAZERO. This is a lunatic thing to do, but somebody did! */
if (*ecode == OP_BRAZERO) ecode++;
ecode += GET(ecode, 1);
while (*ecode == OP_ALT) ecode += GET(ecode, 1);
ecode += 1 + LINK_SIZE - PRIV(OP_lengths)[condcode];
@ -1474,7 +1466,18 @@ for (;;)
md->offset_vector[offset] =
md->offset_vector[md->offset_end - number];
md->offset_vector[offset+1] = (int)(eptr - md->start_subject);
if (offset_top <= offset) offset_top = offset + 2;
/* If this group is at or above the current highwater mark, ensure that
any groups between the current high water mark and this group are marked
unset and then update the high water mark. */
if (offset >= offset_top)
{
register int *iptr = md->offset_vector + offset_top;
register int *iend = md->offset_vector + offset;
while (iptr < iend) *iptr++ = -1;
offset_top = offset + 2;
}
}
ecode += 1 + IMM2_SIZE;
break;
@ -1826,7 +1829,11 @@ for (;;)
are defined in a range that can be tested for. */
if (rrc >= MATCH_BACKTRACK_MIN && rrc <= MATCH_BACKTRACK_MAX)
{
if (new_recursive.offset_save != stacksave)
(PUBL(free))(new_recursive.offset_save);
RRETURN(MATCH_NOMATCH);
}
/* Any return code other than NOMATCH is an error. */
@ -3476,7 +3483,7 @@ for (;;)
if (possessive) continue; /* No backtracking */
for(;;)
{
if (eptr == pp) goto TAIL_RECURSE;
if (eptr <= pp) goto TAIL_RECURSE;
RMATCH(eptr, ecode, offset_top, md, eptrb, RM23);
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
#ifdef SUPPORT_UCP
@ -3897,7 +3904,7 @@ for (;;)
if (possessive) continue; /* No backtracking */
for(;;)
{
if (eptr == pp) goto TAIL_RECURSE;
if (eptr <= pp) goto TAIL_RECURSE;
RMATCH(eptr, ecode, offset_top, md, eptrb, RM30);
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
eptr--;
@ -4032,7 +4039,7 @@ for (;;)
if (possessive) continue; /* No backtracking */
for(;;)
{
if (eptr == pp) goto TAIL_RECURSE;
if (eptr <= pp) goto TAIL_RECURSE;
RMATCH(eptr, ecode, offset_top, md, eptrb, RM34);
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
eptr--;
@ -5603,7 +5610,7 @@ for (;;)
if (possessive) continue; /* No backtracking */
for(;;)
{
if (eptr == pp) goto TAIL_RECURSE;
if (eptr <= pp) goto TAIL_RECURSE;
RMATCH(eptr, ecode, offset_top, md, eptrb, RM44);
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
eptr--;
@ -5645,12 +5652,17 @@ for (;;)
if (possessive) continue; /* No backtracking */
/* We use <= pp rather than == pp to detect the start of the run while
backtracking because the use of \C in UTF mode can cause BACKCHAR to
move back past pp. This is just palliative; the use of \C in UTF mode
is fraught with danger. */
for(;;)
{
int lgb, rgb;
PCRE_PUCHAR fptr;
if (eptr == pp) goto TAIL_RECURSE; /* At start of char run */
if (eptr <= pp) goto TAIL_RECURSE; /* At start of char run */
RMATCH(eptr, ecode, offset_top, md, eptrb, RM45);
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
@ -5668,7 +5680,7 @@ for (;;)
for (;;)
{
if (eptr == pp) goto TAIL_RECURSE; /* At start of char run */
if (eptr <= pp) goto TAIL_RECURSE; /* At start of char run */
fptr = eptr - 1;
if (!utf) c = *fptr; else
{
@ -5918,7 +5930,7 @@ for (;;)
if (possessive) continue; /* No backtracking */
for(;;)
{
if (eptr == pp) goto TAIL_RECURSE;
if (eptr <= pp) goto TAIL_RECURSE;
RMATCH(eptr, ecode, offset_top, md, eptrb, RM46);
if (rrc != MATCH_NOMATCH) RRETURN(rrc);
eptr--;
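
The repeated change from "eptr == pp" to "eptr <= pp" in the backtracking
loops above guards against \C moving the pointer back past the start of a
character run in UTF mode (ChangeLog item 38). A hedged sketch of the pattern
shape named there, assuming the library was built with UTF-8 and
Unicode-properties support (\X needs the latter):

#include <stdio.h>
#include <pcre.h>

int main(void)
{
const char *error;
int erroffset;
int ovector[3];
int rc;
pcre *re = pcre_compile("\\C\\X*Z", PCRE_UTF8, &error, &erroffset, NULL);
if (re == NULL) { printf("compile failed: %s\n", error); return 1; }
rc = pcre_exec(re, NULL, "caf\xc3\xa9", 5, 0, 0, ovector, 3);
printf("rc=%d\n", rc);   /* -1 (no match): backtracking stops at the start */
pcre_free(re);
return 0;
}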

ext/pcre/pcrelib/pcre_internal.h
View File

@ -2450,6 +2450,7 @@ typedef struct compile_data {
BOOL had_pruneorskip; /* (*PRUNE) or (*SKIP) encountered */
BOOL check_lookbehind; /* Lookbehinds need later checking */
BOOL dupnames; /* Duplicate names exist */
BOOL iscondassert; /* Next assert is a condition */
int nltype; /* Newline type */
int nllen; /* Newline string length */
pcre_uchar nl[4]; /* Newline string when fixed length */
@ -2463,6 +2464,13 @@ typedef struct branch_chain {
pcre_uchar *current_branch;
} branch_chain;
/* Structure for mutual recursion detection. */
typedef struct recurse_check {
struct recurse_check *prev;
const pcre_uchar *group;
} recurse_check;
/* Structure for items in a linked list that represents an explicit recursive
call within the pattern; used by pcre_exec(). */
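
The recurse_check structure, moved here so pcre_compile.c and pcre_study.c can
share it, implements a classic idiom: each level of a recursive walk pushes a
stack-allocated node onto a linked chain of groups currently being visited,
and finding the current group already on the chain means mutual recursion. A
self-contained sketch of the idiom (hypothetical names):

#include <stdio.h>

typedef struct recurse_check {
  struct recurse_check *prev;
  const void *group;
} recurse_check;

/* Returns -1 if 'group' is already being walked further up the call chain. */
static int walk(const void *group, recurse_check *recurses)
{
recurse_check this_recurse;
recurse_check *r;
for (r = recurses; r != NULL; r = r->prev)
  if (r->group == group) return -1;          /* mutual recursion detected */
this_recurse.prev = recurses;                /* node lives on this stack frame */
this_recurse.group = group;
/* ... recurse into each referenced group, passing &this_recurse ... */
if (walk(group, &this_recurse) == -1)        /* re-entering the same group */
  printf("inner call detected the cycle\n");
return 0;
}

int main(void)
{
int g;
printf("%d\n", walk(&g, NULL));   /* prints 0; the inner call returned -1 */
return 0;
}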

ext/pcre/pcrelib/pcre_jit_compile.c
File diff suppressed because it is too large

ext/pcre/pcrelib/pcre_study.c
View File

@ -70,7 +70,7 @@ Arguments:
code pointer to start of group (the bracket)
startcode pointer to start of the whole pattern's code
options the compiling options
recurse_depth RECURSE depth
recurses chain of recurse_check to catch mutual recursion
Returns: the minimum length
-1 if \C in UTF-8 mode or (*ACCEPT) was encountered
@ -80,12 +80,13 @@ Returns: the minimum length
static int
find_minlength(const REAL_PCRE *re, const pcre_uchar *code,
const pcre_uchar *startcode, int options, int recurse_depth)
const pcre_uchar *startcode, int options, recurse_check *recurses)
{
int length = -1;
/* PCRE_UTF16 has the same value as PCRE_UTF8. */
BOOL utf = (options & PCRE_UTF8) != 0;
BOOL had_recurse = FALSE;
recurse_check this_recurse;
register int branchlength = 0;
register pcre_uchar *cc = (pcre_uchar *)code + 1 + LINK_SIZE;
@ -130,7 +131,7 @@ for (;;)
case OP_SBRAPOS:
case OP_ONCE:
case OP_ONCE_NC:
d = find_minlength(re, cc, startcode, options, recurse_depth);
d = find_minlength(re, cc, startcode, options, recurses);
if (d < 0) return d;
branchlength += d;
do cc += GET(cc, 1); while (*cc == OP_ALT);
@ -393,7 +394,7 @@ for (;;)
ce = cs = (pcre_uchar *)PRIV(find_bracket)(startcode, utf, GET2(slot, 0));
if (cs == NULL) return -2;
do ce += GET(ce, 1); while (*ce == OP_ALT);
if (cc > cs && cc < ce)
if (cc > cs && cc < ce) /* Simple recursion */
{
d = 0;
had_recurse = TRUE;
@ -401,8 +402,22 @@ for (;;)
}
else
{
int dd = find_minlength(re, cs, startcode, options, recurse_depth);
if (dd < d) d = dd;
recurse_check *r = recurses;
for (r = recurses; r != NULL; r = r->prev) if (r->group == cs) break;
if (r != NULL) /* Mutual recursion */
{
d = 0;
had_recurse = TRUE;
break;
}
else
{
int dd;
this_recurse.prev = recurses;
this_recurse.group = cs;
dd = find_minlength(re, cs, startcode, options, &this_recurse);
if (dd < d) d = dd;
}
}
slot += re->name_entry_size;
}
@ -418,14 +433,26 @@ for (;;)
ce = cs = (pcre_uchar *)PRIV(find_bracket)(startcode, utf, GET2(cc, 1));
if (cs == NULL) return -2;
do ce += GET(ce, 1); while (*ce == OP_ALT);
if (cc > cs && cc < ce)
if (cc > cs && cc < ce) /* Simple recursion */
{
d = 0;
had_recurse = TRUE;
}
else
{
d = find_minlength(re, cs, startcode, options, recurse_depth);
recurse_check *r = recurses;
for (r = recurses; r != NULL; r = r->prev) if (r->group == cs) break;
if (r != NULL) /* Mutual recursion */
{
d = 0;
had_recurse = TRUE;
}
else
{
this_recurse.prev = recurses;
this_recurse.group = cs;
d = find_minlength(re, cs, startcode, options, &this_recurse);
}
}
}
else d = 0;
@ -474,12 +501,21 @@ for (;;)
case OP_RECURSE:
cs = ce = (pcre_uchar *)startcode + GET(cc, 1);
do ce += GET(ce, 1); while (*ce == OP_ALT);
if ((cc > cs && cc < ce) || recurse_depth > 10)
if (cc > cs && cc < ce) /* Simple recursion */
had_recurse = TRUE;
else
{
branchlength += find_minlength(re, cs, startcode, options,
recurse_depth + 1);
recurse_check *r = recurses;
for (r = recurses; r != NULL; r = r->prev) if (r->group == cs) break;
if (r != NULL) /* Mutual recursion */
had_recurse = TRUE;
else
{
this_recurse.prev = recurses;
this_recurse.group = cs;
branchlength += find_minlength(re, cs, startcode, options,
&this_recurse);
}
}
cc += 1 + LINK_SIZE;
break;
@ -1503,7 +1539,7 @@ if ((re->options & PCRE_ANCHORED) == 0 &&
/* Find the minimum length of subject string. */
switch(min = find_minlength(re, code, code, re->options, 0))
switch(min = find_minlength(re, code, code, re->options, NULL))
{
case -2: *errorptr = "internal error: missing capturing bracket"; return NULL;
case -3: *errorptr = "internal error: opcode not recognized"; return NULL;
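
Item 39's fix can be observed directly: with the recurse_check chain in
find_minlength(), studying the pathological pattern from the ChangeLog returns
promptly. A minimal sketch using the public study interface:

#include <stdio.h>
#include <pcre.h>

int main(void)
{
const char *error = NULL;
int erroffset;
pcre_extra *extra;
pcre *re = pcre_compile("((?2){73}(?2))((?1))", 0, &error, &erroffset, NULL);
if (re == NULL) { printf("compile failed: %s\n", error); return 1; }
extra = pcre_study(re, 0, &error);   /* fast in 8.37; very slow before */
printf("study %s\n", error == NULL ? "completed" : error);
if (extra != NULL) pcre_free_study(extra);
pcre_free(re);
return 0;
}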

ext/pcre/pcrelib/sljit/sljitConfig.h
View File

@ -0,0 +1,126 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _SLJIT_CONFIG_H_
#define _SLJIT_CONFIG_H_
/* --------------------------------------------------------------------- */
/* Custom defines */
/* --------------------------------------------------------------------- */
/* Put your custom defines here. This empty section will never change
which helps with maintaining patches (with diff / patch utilities).
/* --------------------------------------------------------------------- */
/* Architecture */
/* --------------------------------------------------------------------- */
/* Architecture selection. */
/* #define SLJIT_CONFIG_X86_32 1 */
/* #define SLJIT_CONFIG_X86_64 1 */
/* #define SLJIT_CONFIG_ARM_V5 1 */
/* #define SLJIT_CONFIG_ARM_V7 1 */
/* #define SLJIT_CONFIG_ARM_THUMB2 1 */
/* #define SLJIT_CONFIG_ARM_64 1 */
/* #define SLJIT_CONFIG_PPC_32 1 */
/* #define SLJIT_CONFIG_PPC_64 1 */
/* #define SLJIT_CONFIG_MIPS_32 1 */
/* #define SLJIT_CONFIG_MIPS_64 1 */
/* #define SLJIT_CONFIG_SPARC_32 1 */
/* #define SLJIT_CONFIG_TILEGX 1 */
/* #define SLJIT_CONFIG_AUTO 1 */
/* #define SLJIT_CONFIG_UNSUPPORTED 1 */
/* --------------------------------------------------------------------- */
/* Utilities */
/* --------------------------------------------------------------------- */
/* Useful for thread-safe compiling of global functions. */
#ifndef SLJIT_UTIL_GLOBAL_LOCK
/* Enabled by default */
#define SLJIT_UTIL_GLOBAL_LOCK 1
#endif
/* Implements a stack like data structure (by using mmap / VirtualAlloc). */
#ifndef SLJIT_UTIL_STACK
/* Enabled by default */
#define SLJIT_UTIL_STACK 1
#endif
/* Single threaded application. Does not require any locks. */
#ifndef SLJIT_SINGLE_THREADED
/* Disabled by default. */
#define SLJIT_SINGLE_THREADED 0
#endif
/* --------------------------------------------------------------------- */
/* Configuration */
/* --------------------------------------------------------------------- */
/* If SLJIT_STD_MACROS_DEFINED is not defined, the application should
define SLJIT_MALLOC, SLJIT_FREE, SLJIT_MEMMOVE, and NULL. */
#ifndef SLJIT_STD_MACROS_DEFINED
/* Disabled by default. */
#define SLJIT_STD_MACROS_DEFINED 0
#endif
/* Executable code allocation:
If SLJIT_EXECUTABLE_ALLOCATOR is not defined, the application should
define both SLJIT_MALLOC_EXEC and SLJIT_FREE_EXEC. */
#ifndef SLJIT_EXECUTABLE_ALLOCATOR
/* Enabled by default. */
#define SLJIT_EXECUTABLE_ALLOCATOR 1
#endif
/* Return with error when an invalid argument is passed. */
#ifndef SLJIT_ARGUMENT_CHECKS
/* Disabled by default */
#define SLJIT_ARGUMENT_CHECKS 0
#endif
/* Debug checks (assertions, etc.). */
#ifndef SLJIT_DEBUG
/* Enabled by default */
#define SLJIT_DEBUG 1
#endif
/* Verbose operations. */
#ifndef SLJIT_VERBOSE
/* Enabled by default */
#define SLJIT_VERBOSE 1
#endif
/*
SLJIT_IS_FPU_AVAILABLE
The availability of the FPU can be controlled by SLJIT_IS_FPU_AVAILABLE.
zero value - FPU is NOT present.
nonzero value - FPU is present.
*/
/* For further configurations, see the beginning of sljitConfigInternal.h */
#endif
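
Every option in sljitConfig.h is wrapped in #ifndef, so a port can override
the defaults from the compiler command line instead of patching the header.
A generic sketch of that guard pattern (the demo macro name is hypothetical):

#include <stdio.h>

#ifndef DEMO_DEBUG
#define DEMO_DEBUG 1        /* enabled by default, like SLJIT_DEBUG above */
#endif

int main(void)
{
printf("DEMO_DEBUG=%d\n", DEMO_DEBUG);   /* cc -DDEMO_DEBUG=0 ... overrides */
return 0;
}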

ext/pcre/pcrelib/sljit/sljitConfigInternal.h
View File

@ -0,0 +1,702 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _SLJIT_CONFIG_INTERNAL_H_
#define _SLJIT_CONFIG_INTERNAL_H_
/*
SLJIT defines the following architecture dependent types and macros:
Types:
sljit_sb, sljit_ub : signed and unsigned 8 bit byte
sljit_sh, sljit_uh : signed and unsigned 16 bit half-word (short) type
sljit_si, sljit_ui : signed and unsigned 32 bit integer type
sljit_sw, sljit_uw : signed and unsigned machine word, enough to store a pointer
sljit_p : unsigned pointer value (usually the same as sljit_uw, but
some 64 bit ABIs may use 32 bit pointers)
sljit_s : single precision floating point value
sljit_d : double precision floating point value
Macros for feature detection (boolean):
SLJIT_32BIT_ARCHITECTURE : 32 bit architecture
SLJIT_64BIT_ARCHITECTURE : 64 bit architecture
SLJIT_LITTLE_ENDIAN : little endian architecture
SLJIT_BIG_ENDIAN : big endian architecture
SLJIT_UNALIGNED : allows unaligned memory accesses for non-fpu operations (only!)
SLJIT_INDIRECT_CALL : see SLJIT_FUNC_OFFSET() for more information
Constants:
SLJIT_NUMBER_OF_REGISTERS : number of available registers
SLJIT_NUMBER_OF_SCRATCH_REGISTERS : number of available scratch registers
SLJIT_NUMBER_OF_SAVED_REGISTERS : number of available saved registers
SLJIT_NUMBER_OF_FLOAT_REGISTERS : number of available floating point registers
SLJIT_NUMBER_OF_SCRATCH_FLOAT_REGISTERS : number of available floating point scratch registers
SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS : number of available floating point saved registers
SLJIT_WORD_SHIFT : the shift required to apply when accessing a sljit_sw/sljit_uw array by index
SLJIT_DOUBLE_SHIFT : the shift required to apply when accessing
a double precision floating point array by index
SLJIT_SINGLE_SHIFT : the shift required to apply when accessing
a single precision floating point array by index
SLJIT_LOCALS_OFFSET : local space starting offset (SLJIT_SP + SLJIT_LOCALS_OFFSET)
SLJIT_RETURN_ADDRESS_OFFSET : a return instruction always adds this offset to the return address
Other macros:
SLJIT_CALL : C calling convention define for both calling JIT form C and C callbacks for JIT
SLJIT_W(number) : defining 64 bit constants on 64 bit architectures (compiler independent helper)
*/
/*****************/
/* Sanity check. */
/*****************/
#if !((defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32) \
|| (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64) \
|| (defined SLJIT_CONFIG_ARM_V5 && SLJIT_CONFIG_ARM_V5) \
|| (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7) \
|| (defined SLJIT_CONFIG_ARM_THUMB2 && SLJIT_CONFIG_ARM_THUMB2) \
|| (defined SLJIT_CONFIG_ARM_64 && SLJIT_CONFIG_ARM_64) \
|| (defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32) \
|| (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64) \
|| (defined SLJIT_CONFIG_MIPS_32 && SLJIT_CONFIG_MIPS_32) \
|| (defined SLJIT_CONFIG_MIPS_64 && SLJIT_CONFIG_MIPS_64) \
|| (defined SLJIT_CONFIG_SPARC_32 && SLJIT_CONFIG_SPARC_32) \
|| (defined SLJIT_CONFIG_TILEGX && SLJIT_CONFIG_TILEGX) \
|| (defined SLJIT_CONFIG_AUTO && SLJIT_CONFIG_AUTO) \
|| (defined SLJIT_CONFIG_UNSUPPORTED && SLJIT_CONFIG_UNSUPPORTED))
#error "An architecture must be selected"
#endif
#if (defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32) \
+ (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64) \
+ (defined SLJIT_CONFIG_ARM_V5 && SLJIT_CONFIG_ARM_V5) \
+ (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7) \
+ (defined SLJIT_CONFIG_ARM_THUMB2 && SLJIT_CONFIG_ARM_THUMB2) \
+ (defined SLJIT_CONFIG_ARM_64 && SLJIT_CONFIG_ARM_64) \
+ (defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32) \
+ (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64) \
+ (defined SLJIT_CONFIG_TILEGX && SLJIT_CONFIG_TILEGX) \
+ (defined SLJIT_CONFIG_MIPS_32 && SLJIT_CONFIG_MIPS_32) \
+ (defined SLJIT_CONFIG_MIPS_64 && SLJIT_CONFIG_MIPS_64) \
+ (defined SLJIT_CONFIG_SPARC_32 && SLJIT_CONFIG_SPARC_32) \
+ (defined SLJIT_CONFIG_AUTO && SLJIT_CONFIG_AUTO) \
+ (defined SLJIT_CONFIG_UNSUPPORTED && SLJIT_CONFIG_UNSUPPORTED) >= 2
#error "Multiple architectures are selected"
#endif
/********************************************************/
/* Automatic CPU detection (requires compiler support). */
/********************************************************/
#if (defined SLJIT_CONFIG_AUTO && SLJIT_CONFIG_AUTO)
#ifndef _WIN32
#if defined(__i386__) || defined(__i386)
#define SLJIT_CONFIG_X86_32 1
#elif defined(__x86_64__)
#define SLJIT_CONFIG_X86_64 1
#elif defined(__arm__) || defined(__ARM__)
#ifdef __thumb2__
#define SLJIT_CONFIG_ARM_THUMB2 1
#elif defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__)
#define SLJIT_CONFIG_ARM_V7 1
#else
#define SLJIT_CONFIG_ARM_V5 1
#endif
#elif defined (__aarch64__)
#define SLJIT_CONFIG_ARM_64 1
#elif defined(__ppc64__) || defined(__powerpc64__) || defined(_ARCH_PPC64) || (defined(_POWER) && defined(__64BIT__))
#define SLJIT_CONFIG_PPC_64 1
#elif defined(__ppc__) || defined(__powerpc__) || defined(_ARCH_PPC) || defined(_ARCH_PWR) || defined(_ARCH_PWR2) || defined(_POWER)
#define SLJIT_CONFIG_PPC_32 1
#elif defined(__mips__) && !defined(_LP64)
#define SLJIT_CONFIG_MIPS_32 1
#elif defined(__mips64)
#define SLJIT_CONFIG_MIPS_64 1
#elif defined(__sparc__) || defined(__sparc)
#define SLJIT_CONFIG_SPARC_32 1
#elif defined(__tilegx__)
#define SLJIT_CONFIG_TILEGX 1
#else
/* Unsupported architecture */
#define SLJIT_CONFIG_UNSUPPORTED 1
#endif
#else /* !_WIN32 */
#if defined(_M_X64) || defined(__x86_64__)
#define SLJIT_CONFIG_X86_64 1
#elif defined(_ARM_)
#define SLJIT_CONFIG_ARM_V5 1
#else
#define SLJIT_CONFIG_X86_32 1
#endif
#endif /* !WIN32 */
#endif /* SLJIT_CONFIG_AUTO */
#if (defined SLJIT_CONFIG_UNSUPPORTED && SLJIT_CONFIG_UNSUPPORTED)
#undef SLJIT_EXECUTABLE_ALLOCATOR
#endif
/******************************/
/* CPU family type detection. */
/******************************/
#if (defined SLJIT_CONFIG_ARM_V5 && SLJIT_CONFIG_ARM_V5) || (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7) \
|| (defined SLJIT_CONFIG_ARM_THUMB2 && SLJIT_CONFIG_ARM_THUMB2)
#define SLJIT_CONFIG_ARM_32 1
#endif
#if (defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32) || (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64)
#define SLJIT_CONFIG_X86 1
#elif (defined SLJIT_CONFIG_ARM_32 && SLJIT_CONFIG_ARM_32) || (defined SLJIT_CONFIG_ARM_64 && SLJIT_CONFIG_ARM_64)
#define SLJIT_CONFIG_ARM 1
#elif (defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32) || (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64)
#define SLJIT_CONFIG_PPC 1
#elif (defined SLJIT_CONFIG_MIPS_32 && SLJIT_CONFIG_MIPS_32) || (defined SLJIT_CONFIG_MIPS_64 && SLJIT_CONFIG_MIPS_64)
#define SLJIT_CONFIG_MIPS 1
#elif (defined SLJIT_CONFIG_SPARC_32 && SLJIT_CONFIG_SPARC_32) || (defined SLJIT_CONFIG_SPARC_64 && SLJIT_CONFIG_SPARC_64)
#define SLJIT_CONFIG_SPARC 1
#endif
/**********************************/
/* External function definitions. */
/**********************************/
#if !(defined SLJIT_STD_MACROS_DEFINED && SLJIT_STD_MACROS_DEFINED)
/* These headers are needed for the macros below. */
#include <stdlib.h>
#include <string.h>
#endif /* SLJIT_STD_MACROS_DEFINED */
/* General macros:
Note: SLJIT is designed to be as independent from them as possible.
In release mode (SLJIT_DEBUG is not defined) only the following
external functions are needed:
*/
#ifndef SLJIT_MALLOC
#define SLJIT_MALLOC(size, allocator_data) malloc(size)
#endif
#ifndef SLJIT_FREE
#define SLJIT_FREE(ptr, allocator_data) free(ptr)
#endif
#ifndef SLJIT_MEMMOVE
#define SLJIT_MEMMOVE(dest, src, len) memmove(dest, src, len)
#endif
#ifndef SLJIT_ZEROMEM
#define SLJIT_ZEROMEM(dest, len) memset(dest, 0, len)
#endif
/***************************/
/* Compiler helper macros. */
/***************************/
#if !defined(SLJIT_LIKELY) && !defined(SLJIT_UNLIKELY)
#if defined(__GNUC__) && (__GNUC__ >= 3)
#define SLJIT_LIKELY(x) __builtin_expect((x), 1)
#define SLJIT_UNLIKELY(x) __builtin_expect((x), 0)
#else
#define SLJIT_LIKELY(x) (x)
#define SLJIT_UNLIKELY(x) (x)
#endif
#endif /* !defined(SLJIT_LIKELY) && !defined(SLJIT_UNLIKELY) */
#ifndef SLJIT_INLINE
/* Inline functions. Some old compilers do not support them. */
#if defined(__SUNPRO_C) && __SUNPRO_C <= 0x510
#define SLJIT_INLINE
#else
#define SLJIT_INLINE __inline
#endif
#endif /* !SLJIT_INLINE */
#ifndef SLJIT_NOINLINE
/* Functions that must not be inlined. */
#if defined(__GNUC__)
#define SLJIT_NOINLINE __attribute__ ((noinline))
#else
#define SLJIT_NOINLINE
#endif
#endif /* !SLJIT_NOINLINE */
#ifndef SLJIT_CONST
/* Const variables. */
#define SLJIT_CONST const
#endif
#ifndef SLJIT_UNUSED_ARG
/* Unused arguments. */
#define SLJIT_UNUSED_ARG(arg) (void)arg
#endif
/*********************************/
/* Type of public API functions. */
/*********************************/
#if (defined SLJIT_CONFIG_STATIC && SLJIT_CONFIG_STATIC)
/* Static ABI functions. For all-in-one programs. */
#if defined(__GNUC__)
/* Disable unused warnings in gcc. */
#define SLJIT_API_FUNC_ATTRIBUTE static __attribute__((unused))
#else
#define SLJIT_API_FUNC_ATTRIBUTE static
#endif
#else
#define SLJIT_API_FUNC_ATTRIBUTE
#endif /* (defined SLJIT_CONFIG_STATIC && SLJIT_CONFIG_STATIC) */
/****************************/
/* Instruction cache flush. */
/****************************/
#ifndef SLJIT_CACHE_FLUSH
#if (defined SLJIT_CONFIG_X86 && SLJIT_CONFIG_X86)
/* Not required to implement on archs with unified caches. */
#define SLJIT_CACHE_FLUSH(from, to)
#elif defined __APPLE__
/* Supported by all Macs since Mac OS 10.5.
However, it does not work on non-jailbroken iOS devices,
although the compilation is successful. */
#define SLJIT_CACHE_FLUSH(from, to) \
sys_icache_invalidate((char*)(from), (char*)(to) - (char*)(from))
#elif defined __ANDROID__
/* Android lacks __clear_cache; instead, cacheflush should be used. */
#define SLJIT_CACHE_FLUSH(from, to) \
cacheflush((long)(from), (long)(to), 0)
#elif (defined SLJIT_CONFIG_PPC && SLJIT_CONFIG_PPC)
/* The __clear_cache() implementation of GCC is a dummy function on PowerPC. */
#define SLJIT_CACHE_FLUSH(from, to) \
ppc_cache_flush((from), (to))
#elif (defined SLJIT_CONFIG_SPARC_32 && SLJIT_CONFIG_SPARC_32)
/* The __clear_cache() implementation of GCC is a dummy function on Sparc. */
#define SLJIT_CACHE_FLUSH(from, to) \
sparc_cache_flush((from), (to))
#else
/* Calls __ARM_NR_cacheflush on ARM-Linux. */
#define SLJIT_CACHE_FLUSH(from, to) \
__clear_cache((char*)(from), (char*)(to))
#endif
#endif /* !SLJIT_CACHE_FLUSH */
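/*
   Minimal sketch (hypothetical) of the intended call pattern: after machine
   code has been written into executable memory, the range must be flushed
   before it is executed.

     sljit_ub *code = ...;   freshly written instructions
     sljit_uw  len  = ...;
     SLJIT_CACHE_FLUSH(code, code + len);
     now it is safe to jump into the buffer
*/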
/******************************************************/
/* Byte/half/int/word/single/double type definitions. */
/******************************************************/
/* 8 bit byte type. */
typedef unsigned char sljit_ub;
typedef signed char sljit_sb;
/* 16 bit half-word type. */
typedef unsigned short int sljit_uh;
typedef signed short int sljit_sh;
/* 32 bit integer type. */
typedef unsigned int sljit_ui;
typedef signed int sljit_si;
/* Machine word type. Enough for storing a pointer.
32 bit for 32 bit machines.
64 bit for 64 bit machines. */
#if (defined SLJIT_CONFIG_UNSUPPORTED && SLJIT_CONFIG_UNSUPPORTED)
/* Just to have something. */
#define SLJIT_WORD_SHIFT 0
typedef unsigned long int sljit_uw;
typedef long int sljit_sw;
#elif !(defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64) \
&& !(defined SLJIT_CONFIG_ARM_64 && SLJIT_CONFIG_ARM_64) \
&& !(defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64) \
&& !(defined SLJIT_CONFIG_MIPS_64 && SLJIT_CONFIG_MIPS_64) \
&& !(defined SLJIT_CONFIG_TILEGX && SLJIT_CONFIG_TILEGX)
#define SLJIT_32BIT_ARCHITECTURE 1
#define SLJIT_WORD_SHIFT 2
typedef unsigned int sljit_uw;
typedef int sljit_sw;
#else
#define SLJIT_64BIT_ARCHITECTURE 1
#define SLJIT_WORD_SHIFT 3
#ifdef _WIN32
typedef unsigned __int64 sljit_uw;
typedef __int64 sljit_sw;
#else
typedef unsigned long int sljit_uw;
typedef long int sljit_sw;
#endif
#endif
typedef sljit_uw sljit_p;
/* Floating point types. */
typedef float sljit_s;
typedef double sljit_d;
/* Shift for pointer sized data. */
#define SLJIT_POINTER_SHIFT SLJIT_WORD_SHIFT
/* Shift for double precision sized data. */
#define SLJIT_DOUBLE_SHIFT 3
#define SLJIT_SINGLE_SHIFT 2
#ifndef SLJIT_W
/* Defining long constants. */
#if (defined SLJIT_64BIT_ARCHITECTURE && SLJIT_64BIT_ARCHITECTURE)
#define SLJIT_W(w) (w##ll)
#else
#define SLJIT_W(w) (w)
#endif
#endif /* !SLJIT_W */
/*************************/
/* Endianness detection. */
/*************************/
#if !defined(SLJIT_BIG_ENDIAN) && !defined(SLJIT_LITTLE_ENDIAN)
/* These macros are mostly useful for applications. */
#if (defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32) \
|| (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64)
#ifdef __LITTLE_ENDIAN__
#define SLJIT_LITTLE_ENDIAN 1
#else
#define SLJIT_BIG_ENDIAN 1
#endif
#elif (defined SLJIT_CONFIG_MIPS_32 && SLJIT_CONFIG_MIPS_32) \
|| (defined SLJIT_CONFIG_MIPS_64 && SLJIT_CONFIG_MIPS_64)
#ifdef __MIPSEL__
#define SLJIT_LITTLE_ENDIAN 1
#else
#define SLJIT_BIG_ENDIAN 1
#endif
#elif (defined SLJIT_CONFIG_SPARC_32 && SLJIT_CONFIG_SPARC_32)
#define SLJIT_BIG_ENDIAN 1
#else
#define SLJIT_LITTLE_ENDIAN 1
#endif
#endif /* !defined(SLJIT_BIG_ENDIAN) && !defined(SLJIT_LITTLE_ENDIAN) */
/* Sanity check. */
#if (defined SLJIT_BIG_ENDIAN && SLJIT_BIG_ENDIAN) && (defined SLJIT_LITTLE_ENDIAN && SLJIT_LITTLE_ENDIAN)
#error "Exactly one endianness must be selected"
#endif
#if !(defined SLJIT_BIG_ENDIAN && SLJIT_BIG_ENDIAN) && !(defined SLJIT_LITTLE_ENDIAN && SLJIT_LITTLE_ENDIAN)
#error "Exactly one endianness must be selected"
#endif
#ifndef SLJIT_UNALIGNED
#if (defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32) \
|| (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64) \
|| (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7) \
|| (defined SLJIT_CONFIG_ARM_THUMB2 && SLJIT_CONFIG_ARM_THUMB2) \
|| (defined SLJIT_CONFIG_ARM_64 && SLJIT_CONFIG_ARM_64) \
|| (defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32) \
|| (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64)
#define SLJIT_UNALIGNED 1
#endif
#endif /* !SLJIT_UNALIGNED */
#if (defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32)
/* Auto detect SSE2 support using CPUID.
On 64 bit x86 CPUs, SSE2 is always present. */
#define SLJIT_DETECT_SSE2 1
#endif
/*****************************************************************************************/
/* Calling convention of functions generated by SLJIT or called from the generated code. */
/*****************************************************************************************/
#ifndef SLJIT_CALL
#if (defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32)
#if defined(__GNUC__) && !defined(__APPLE__)
#define SLJIT_CALL __attribute__ ((fastcall))
#define SLJIT_X86_32_FASTCALL 1
#elif defined(_MSC_VER)
#define SLJIT_CALL __fastcall
#define SLJIT_X86_32_FASTCALL 1
#elif defined(__BORLANDC__)
#define SLJIT_CALL __msfastcall
#define SLJIT_X86_32_FASTCALL 1
#else /* Unknown compiler. */
/* The cdecl calling convention is the default. */
#define SLJIT_CALL
#endif
#else /* Non x86-32 architectures. */
#define SLJIT_CALL
#endif /* SLJIT_CONFIG_X86_32 */
#endif /* !SLJIT_CALL */
#ifndef SLJIT_INDIRECT_CALL
#if ((defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64) && (defined SLJIT_BIG_ENDIAN && SLJIT_BIG_ENDIAN)) \
|| ((defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32) && defined _AIX)
/* Certain PPC compilers appear to use indirect addressing for functions,
which complicates things. */
#define SLJIT_INDIRECT_CALL 1
#endif
#endif /* SLJIT_INDIRECT_CALL */
/* The offset which needs to be subtracted from the return address to
determine the next instruction executed after the return. */
#ifndef SLJIT_RETURN_ADDRESS_OFFSET
#if (defined SLJIT_CONFIG_SPARC_32 && SLJIT_CONFIG_SPARC_32)
#define SLJIT_RETURN_ADDRESS_OFFSET 8
#else
#define SLJIT_RETURN_ADDRESS_OFFSET 0
#endif
#endif /* SLJIT_RETURN_ADDRESS_OFFSET */
/***************************************************/
/* Functions of the built-in executable allocator. */
/***************************************************/
#if (defined SLJIT_EXECUTABLE_ALLOCATOR && SLJIT_EXECUTABLE_ALLOCATOR)
SLJIT_API_FUNC_ATTRIBUTE void* sljit_malloc_exec(sljit_uw size);
SLJIT_API_FUNC_ATTRIBUTE void sljit_free_exec(void* ptr);
SLJIT_API_FUNC_ATTRIBUTE void sljit_free_unused_memory_exec(void);
#define SLJIT_MALLOC_EXEC(size) sljit_malloc_exec(size)
#define SLJIT_FREE_EXEC(ptr) sljit_free_exec(ptr)
#endif
/**********************************************/
/* Registers and locals offset determination. */
/**********************************************/
#if (defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32)
#define SLJIT_NUMBER_OF_REGISTERS 10
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 7
#if (defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
#define SLJIT_LOCALS_OFFSET_BASE ((2 + 4) * sizeof(sljit_sw))
#else
/* Maximum 3 arguments are passed on the stack, +1 for double alignment. */
#define SLJIT_LOCALS_OFFSET_BASE ((3 + 1 + 4) * sizeof(sljit_sw))
#endif /* SLJIT_X86_32_FASTCALL */
#elif (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64)
#ifndef _WIN64
#define SLJIT_NUMBER_OF_REGISTERS 12
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 6
#define SLJIT_LOCALS_OFFSET_BASE (sizeof(sljit_sw))
#else
#define SLJIT_NUMBER_OF_REGISTERS 12
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 8
#define SLJIT_LOCALS_OFFSET_BASE ((4 + 2) * sizeof(sljit_sw))
#endif /* _WIN64 */
#elif (defined SLJIT_CONFIG_ARM_V5 && SLJIT_CONFIG_ARM_V5) || (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7)
#define SLJIT_NUMBER_OF_REGISTERS 11
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 8
#define SLJIT_LOCALS_OFFSET_BASE 0
#elif (defined SLJIT_CONFIG_ARM_THUMB2 && SLJIT_CONFIG_ARM_THUMB2)
#define SLJIT_NUMBER_OF_REGISTERS 11
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 7
#define SLJIT_LOCALS_OFFSET_BASE 0
#elif (defined SLJIT_CONFIG_ARM_64 && SLJIT_CONFIG_ARM_64)
#define SLJIT_NUMBER_OF_REGISTERS 25
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 10
#define SLJIT_LOCALS_OFFSET_BASE (2 * sizeof(sljit_sw))
#elif (defined SLJIT_CONFIG_PPC && SLJIT_CONFIG_PPC)
#define SLJIT_NUMBER_OF_REGISTERS 22
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 17
#if (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64) || (defined _AIX)
#define SLJIT_LOCALS_OFFSET_BASE ((6 + 8) * sizeof(sljit_sw))
#elif (defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32)
/* Add +1 for double alignment. */
#define SLJIT_LOCALS_OFFSET_BASE ((3 + 1) * sizeof(sljit_sw))
#else
#define SLJIT_LOCALS_OFFSET_BASE (3 * sizeof(sljit_sw))
#endif /* SLJIT_CONFIG_PPC_64 || _AIX */
#elif (defined SLJIT_CONFIG_MIPS && SLJIT_CONFIG_MIPS)
#define SLJIT_NUMBER_OF_REGISTERS 17
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 8
#if (defined SLJIT_CONFIG_MIPS_32 && SLJIT_CONFIG_MIPS_32)
#define SLJIT_LOCALS_OFFSET_BASE (4 * sizeof(sljit_sw))
#else
#define SLJIT_LOCALS_OFFSET_BASE 0
#endif
#elif (defined SLJIT_CONFIG_SPARC && SLJIT_CONFIG_SPARC)
#define SLJIT_NUMBER_OF_REGISTERS 18
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 14
#if (defined SLJIT_CONFIG_SPARC_32 && SLJIT_CONFIG_SPARC_32)
/* Add +1 for double alignment. */
#define SLJIT_LOCALS_OFFSET_BASE ((23 + 1) * sizeof(sljit_sw))
#endif
#elif (defined SLJIT_CONFIG_UNSUPPORTED && SLJIT_CONFIG_UNSUPPORTED)
#define SLJIT_NUMBER_OF_REGISTERS 0
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 0
#define SLJIT_LOCALS_OFFSET_BASE 0
#endif
#define SLJIT_LOCALS_OFFSET (SLJIT_LOCALS_OFFSET_BASE)
#define SLJIT_NUMBER_OF_SCRATCH_REGISTERS \
(SLJIT_NUMBER_OF_REGISTERS - SLJIT_NUMBER_OF_SAVED_REGISTERS)
#define SLJIT_NUMBER_OF_FLOAT_REGISTERS 6
#if (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64) && (defined _WIN64)
#define SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS 1
#else
#define SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS 0
#endif
#define SLJIT_NUMBER_OF_SCRATCH_FLOAT_REGISTERS \
(SLJIT_NUMBER_OF_FLOAT_REGISTERS - SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS)
/*************************************/
/* Debug and verbose related macros. */
/*************************************/
#if (defined SLJIT_VERBOSE && SLJIT_VERBOSE)
#include <stdio.h>
#endif
#if (defined SLJIT_DEBUG && SLJIT_DEBUG)
#if !defined(SLJIT_ASSERT) || !defined(SLJIT_ASSERT_STOP)
/* SLJIT_HALT_PROCESS must halt the process. */
#ifndef SLJIT_HALT_PROCESS
#include <stdlib.h>
#define SLJIT_HALT_PROCESS() \
abort();
#endif /* !SLJIT_HALT_PROCESS */
#include <stdio.h>
#endif /* !SLJIT_ASSERT || !SLJIT_ASSERT_STOP */
/* Feel free to redefine these two macros. */
#ifndef SLJIT_ASSERT
#define SLJIT_ASSERT(x) \
do { \
if (SLJIT_UNLIKELY(!(x))) { \
printf("Assertion failed at " __FILE__ ":%d\n", __LINE__); \
SLJIT_HALT_PROCESS(); \
} \
} while (0)
#endif /* !SLJIT_ASSERT */
#ifndef SLJIT_ASSERT_STOP
#define SLJIT_ASSERT_STOP() \
do { \
printf("Should never been reached " __FILE__ ":%d\n", __LINE__); \
SLJIT_HALT_PROCESS(); \
} while (0)
#endif /* !SLJIT_ASSERT_STOP */
#else /* (defined SLJIT_DEBUG && SLJIT_DEBUG) */
/* Forcing empty, but valid statements. */
#undef SLJIT_ASSERT
#undef SLJIT_ASSERT_STOP
#define SLJIT_ASSERT(x) \
do { } while (0)
#define SLJIT_ASSERT_STOP() \
do { } while (0)
#endif /* (defined SLJIT_DEBUG && SLJIT_DEBUG) */
#ifndef SLJIT_COMPILE_ASSERT
/* Should be improved eventually. */
#define SLJIT_COMPILE_ASSERT(x, description) \
SLJIT_ASSERT(x)
#endif /* !SLJIT_COMPILE_ASSERT */
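/*
   A typical (hypothetical) use of the macro above: turning a build-time
   assumption into an assertion. Since it currently expands to SLJIT_ASSERT,
   it only checks at runtime, hence the "should be improved" note.

     SLJIT_COMPILE_ASSERT(sizeof(sljit_sw) == sizeof(void*), sw_must_hold_a_pointer);
*/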
#endif

View File

@ -0,0 +1,312 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
This file contains a simple executable memory allocator.
It is assumed that executable code blocks are usually medium (or sometimes
large) memory blocks, and that the allocator is not called too frequently
(so it is less optimized than other allocators). Thus, using it as a
generic allocator is not recommended.
How does it work:
Memory is allocated in contiguous memory areas called chunks by alloc_chunk()
Chunk format:
[ block ][ block ] ... [ block ][ block terminator ]
Every block, including the block terminator, starts with a block_header.
The block header contains the size of the previous and the next block.
These sizes can also contain special values.
Block size:
0 - The block is a free_block, with a different size member.
1 - The block is a block terminator.
n - The block is used at the moment, and the value contains its size.
Previous block size:
0 - This is the first block of the memory chunk.
n - The size of the previous block.
Using these size values we can walk the block chain forward or backward.
The unused blocks are stored in a chain list pointed to by free_blocks. This
list is useful when a suitable memory area has to be found while the
allocator is called.
When a block is freed, the new free block is merged with its adjacent free
blocks if possible. For example, if the layout is
[ free block ][ used block ][ free block ]
and the "used block" is freed, the three blocks are merged into
[ one big free block ]
*/
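/*
   Sketch (hypothetical helper, using the AS_BLOCK_HEADER macro defined
   below): walking the block chain forward using the stored sizes. A used
   block stores its size in header.size; a free block stores 0 there and
   keeps the real size in free_block.size; size == 1 marks the terminator.

     struct block_header *h = (struct block_header*)chunk_start;
     while (h->size != 1) {
         sljit_uw size = h->size ? h->size : ((struct free_block*)h)->size;
         h = AS_BLOCK_HEADER(h, size);
     }
*/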
/* --------------------------------------------------------------------- */
/* System (OS) functions */
/* --------------------------------------------------------------------- */
/* 64 KByte. */
#define CHUNK_SIZE 0x10000
/*
alloc_chunk / free_chunk :
* allocate executable system memory chunks
* the size is always divisible by CHUNK_SIZE
allocator_grab_lock / allocator_release_lock :
* make the allocator thread safe
* can be empty if the OS (or the application) does not support threading
* only the allocator requires this lock, sljit is fully thread safe
as it only uses local variables
*/
#ifdef _WIN32
static SLJIT_INLINE void* alloc_chunk(sljit_uw size)
{
return VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
}
static SLJIT_INLINE void free_chunk(void* chunk, sljit_uw size)
{
SLJIT_UNUSED_ARG(size);
VirtualFree(chunk, 0, MEM_RELEASE);
}
#else
static SLJIT_INLINE void* alloc_chunk(sljit_uw size)
{
void* retval;
#ifdef MAP_ANON
retval = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_PRIVATE | MAP_ANON, -1, 0);
#else
if (dev_zero < 0) {
if (open_dev_zero())
return NULL;
}
retval = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_PRIVATE, dev_zero, 0);
#endif
return (retval != MAP_FAILED) ? retval : NULL;
}
static SLJIT_INLINE void free_chunk(void* chunk, sljit_uw size)
{
munmap(chunk, size);
}
#endif
/* --------------------------------------------------------------------- */
/* Common functions */
/* --------------------------------------------------------------------- */
#define CHUNK_MASK (~(CHUNK_SIZE - 1))
struct block_header {
sljit_uw size;
sljit_uw prev_size;
};
struct free_block {
struct block_header header;
struct free_block *next;
struct free_block *prev;
sljit_uw size;
};
#define AS_BLOCK_HEADER(base, offset) \
((struct block_header*)(((sljit_ub*)base) + offset))
#define AS_FREE_BLOCK(base, offset) \
((struct free_block*)(((sljit_ub*)base) + offset))
#define MEM_START(base) ((void*)(((sljit_ub*)base) + sizeof(struct block_header)))
#define ALIGN_SIZE(size) (((size) + sizeof(struct block_header) + 7) & ~7)
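/* Worked example (hypothetical numbers, assuming an 8 byte sljit_uw, so
   sizeof(struct block_header) == 16): ALIGN_SIZE(20) == (20 + 16 + 7) & ~7
   == 40, i.e. every allocation gains room for its header and is padded to
   an 8 byte boundary. */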
static struct free_block* free_blocks;
static sljit_uw allocated_size;
static sljit_uw total_size;
static SLJIT_INLINE void sljit_insert_free_block(struct free_block *free_block, sljit_uw size)
{
free_block->header.size = 0;
free_block->size = size;
free_block->next = free_blocks;
free_block->prev = 0;
if (free_blocks)
free_blocks->prev = free_block;
free_blocks = free_block;
}
static SLJIT_INLINE void sljit_remove_free_block(struct free_block *free_block)
{
if (free_block->next)
free_block->next->prev = free_block->prev;
if (free_block->prev)
free_block->prev->next = free_block->next;
else {
SLJIT_ASSERT(free_blocks == free_block);
free_blocks = free_block->next;
}
}
SLJIT_API_FUNC_ATTRIBUTE void* sljit_malloc_exec(sljit_uw size)
{
struct block_header *header;
struct block_header *next_header;
struct free_block *free_block;
sljit_uw chunk_size;
allocator_grab_lock();
if (size < sizeof(struct free_block))
size = sizeof(struct free_block);
size = ALIGN_SIZE(size);
free_block = free_blocks;
while (free_block) {
if (free_block->size >= size) {
chunk_size = free_block->size;
if (chunk_size > size + 64) {
/* We just cut a block from the end of the free block. */
chunk_size -= size;
free_block->size = chunk_size;
header = AS_BLOCK_HEADER(free_block, chunk_size);
header->prev_size = chunk_size;
AS_BLOCK_HEADER(header, size)->prev_size = size;
}
else {
sljit_remove_free_block(free_block);
header = (struct block_header*)free_block;
size = chunk_size;
}
allocated_size += size;
header->size = size;
allocator_release_lock();
return MEM_START(header);
}
free_block = free_block->next;
}
chunk_size = (size + sizeof(struct block_header) + CHUNK_SIZE - 1) & CHUNK_MASK;
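	/* e.g. (hypothetical numbers, 8 byte sljit_uw): size == 100000 gives
	   (100000 + 16 + 0xffff) & CHUNK_MASK == 131072, i.e. two 64 KByte chunks. */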
header = (struct block_header*)alloc_chunk(chunk_size);
if (!header) {
allocator_release_lock();
return NULL;
}
chunk_size -= sizeof(struct block_header);
total_size += chunk_size;
header->prev_size = 0;
if (chunk_size > size + 64) {
/* Cut the allocated space into a free and a used block. */
allocated_size += size;
header->size = size;
chunk_size -= size;
free_block = AS_FREE_BLOCK(header, size);
free_block->header.prev_size = size;
sljit_insert_free_block(free_block, chunk_size);
next_header = AS_BLOCK_HEADER(free_block, chunk_size);
}
else {
/* All space belongs to this allocation. */
allocated_size += chunk_size;
header->size = chunk_size;
next_header = AS_BLOCK_HEADER(header, chunk_size);
}
next_header->size = 1;
next_header->prev_size = chunk_size;
allocator_release_lock();
return MEM_START(header);
}
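/*
   Minimal usage sketch (hypothetical): every successful sljit_malloc_exec()
   call is expected to be paired with sljit_free_exec() on the same pointer.

     void *code = sljit_malloc_exec(128);
     if (code) {
         ... copy generated machine code here, then flush the i-cache ...
         sljit_free_exec(code);
     }
*/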
SLJIT_API_FUNC_ATTRIBUTE void sljit_free_exec(void* ptr)
{
struct block_header *header;
struct free_block* free_block;
allocator_grab_lock();
header = AS_BLOCK_HEADER(ptr, -(sljit_sw)sizeof(struct block_header));
allocated_size -= header->size;
/* Connecting free blocks together if possible. */
/* If header->prev_size == 0, free_block will equal to header.
In this case, free_block->header.size will be > 0. */
free_block = AS_FREE_BLOCK(header, -(sljit_sw)header->prev_size);
if (SLJIT_UNLIKELY(!free_block->header.size)) {
free_block->size += header->size;
header = AS_BLOCK_HEADER(free_block, free_block->size);
header->prev_size = free_block->size;
}
else {
free_block = (struct free_block*)header;
sljit_insert_free_block(free_block, header->size);
}
header = AS_BLOCK_HEADER(free_block, free_block->size);
if (SLJIT_UNLIKELY(!header->size)) {
free_block->size += ((struct free_block*)header)->size;
sljit_remove_free_block((struct free_block*)header);
header = AS_BLOCK_HEADER(free_block, free_block->size);
header->prev_size = free_block->size;
}
/* The whole chunk is free. */
if (SLJIT_UNLIKELY(!free_block->header.prev_size && header->size == 1)) {
/* If this block is freed, we still have (allocated_size / 2) free space. */
if (total_size - free_block->size > (allocated_size * 3 / 2)) {
total_size -= free_block->size;
sljit_remove_free_block(free_block);
free_chunk(free_block, free_block->size + sizeof(struct block_header));
}
}
allocator_release_lock();
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_free_unused_memory_exec(void)
{
struct free_block* free_block;
struct free_block* next_free_block;
allocator_grab_lock();
free_block = free_blocks;
while (free_block) {
next_free_block = free_block->next;
if (!free_block->header.prev_size &&
AS_BLOCK_HEADER(free_block, free_block->size)->size == 1) {
total_size -= free_block->size;
sljit_remove_free_block(free_block);
free_chunk(free_block, free_block->size + sizeof(struct block_header));
}
free_block = next_free_block;
}
SLJIT_ASSERT((total_size && free_blocks) || (!total_size && !free_blocks));
allocator_release_lock();
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,366 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* mips 32-bit arch dependent functions. */
static sljit_si load_immediate(struct sljit_compiler *compiler, sljit_si dst_ar, sljit_sw imm)
{
if (!(imm & ~0xffff))
return push_inst(compiler, ORI | SA(0) | TA(dst_ar) | IMM(imm), dst_ar);
if (imm < 0 && imm >= SIMM_MIN)
return push_inst(compiler, ADDIU | SA(0) | TA(dst_ar) | IMM(imm), dst_ar);
FAIL_IF(push_inst(compiler, LUI | TA(dst_ar) | IMM(imm >> 16), dst_ar));
return (imm & 0xffff) ? push_inst(compiler, ORI | SA(dst_ar) | TA(dst_ar) | IMM(imm), dst_ar) : SLJIT_SUCCESS;
}
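/* Worked example (hypothetical value): imm == 0x12345678 takes the last path
   above and becomes LUI dst, 0x1234 followed by ORI dst, dst, 0x5678;
   imm == 0x00001234 is emitted as a single ORI, and a small negative
   immediate such as -5 as a single ADDIU. */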
#define EMIT_LOGICAL(op_imm, op_norm) \
if (flags & SRC2_IMM) { \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, op_imm | S(src1) | TA(EQUAL_FLAG) | IMM(src2), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, op_imm | S(src1) | T(dst) | IMM(src2), DR(dst))); \
} \
else { \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, op_norm | S(src1) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, op_norm | S(src1) | T(src2) | D(dst), DR(dst))); \
}
#define EMIT_SHIFT(op_imm, op_v) \
if (flags & SRC2_IMM) { \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, op_imm | T(src1) | DA(EQUAL_FLAG) | SH_IMM(src2), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, op_imm | T(src1) | D(dst) | SH_IMM(src2), DR(dst))); \
} \
else { \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, op_v | S(src2) | T(src1) | DA(EQUAL_FLAG), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, op_v | S(src2) | T(src1) | D(dst), DR(dst))); \
}
static SLJIT_INLINE sljit_si emit_single_op(struct sljit_compiler *compiler, sljit_si op, sljit_si flags,
sljit_si dst, sljit_si src1, sljit_sw src2)
{
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_UI:
case SLJIT_MOV_SI:
case SLJIT_MOV_P:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if (dst != src2)
return push_inst(compiler, ADDU | S(src2) | TA(0) | D(dst), DR(dst));
return SLJIT_SUCCESS;
case SLJIT_MOV_UB:
case SLJIT_MOV_SB:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SB) {
#if (defined SLJIT_MIPS_R1 && SLJIT_MIPS_R1)
return push_inst(compiler, SEB | T(src2) | D(dst), DR(dst));
#else
FAIL_IF(push_inst(compiler, SLL | T(src2) | D(dst) | SH_IMM(24), DR(dst)));
return push_inst(compiler, SRA | T(dst) | D(dst) | SH_IMM(24), DR(dst));
#endif
}
return push_inst(compiler, ANDI | S(src2) | T(dst) | IMM(0xff), DR(dst));
}
else if (dst != src2)
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
case SLJIT_MOV_UH:
case SLJIT_MOV_SH:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SH) {
#if (defined SLJIT_MIPS_R1 && SLJIT_MIPS_R1)
return push_inst(compiler, SEH | T(src2) | D(dst), DR(dst));
#else
FAIL_IF(push_inst(compiler, SLL | T(src2) | D(dst) | SH_IMM(16), DR(dst)));
return push_inst(compiler, SRA | T(dst) | D(dst) | SH_IMM(16), DR(dst));
#endif
}
return push_inst(compiler, ANDI | S(src2) | T(dst) | IMM(0xffff), DR(dst));
}
else if (dst != src2)
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
case SLJIT_NOT:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, NOR | S(src2) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, NOR | S(src2) | T(src2) | D(dst), DR(dst)));
return SLJIT_SUCCESS;
case SLJIT_CLZ:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
#if (defined SLJIT_MIPS_R1 && SLJIT_MIPS_R1)
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, CLZ | S(src2) | TA(EQUAL_FLAG) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, CLZ | S(src2) | T(dst) | D(dst), DR(dst)));
#else
if (SLJIT_UNLIKELY(flags & UNUSED_DEST)) {
FAIL_IF(push_inst(compiler, SRL | T(src2) | DA(EQUAL_FLAG) | SH_IMM(31), EQUAL_FLAG));
return push_inst(compiler, XORI | SA(EQUAL_FLAG) | TA(EQUAL_FLAG) | IMM(1), EQUAL_FLAG);
}
/* Nearly all instructions are unmovable in the following sequence. */
FAIL_IF(push_inst(compiler, ADDU | S(src2) | TA(0) | D(TMP_REG1), DR(TMP_REG1)));
/* Check zero. */
FAIL_IF(push_inst(compiler, BEQ | S(TMP_REG1) | TA(0) | IMM(5), UNMOVABLE_INS));
FAIL_IF(push_inst(compiler, ORI | SA(0) | T(dst) | IMM(32), UNMOVABLE_INS));
FAIL_IF(push_inst(compiler, ADDIU | SA(0) | T(dst) | IMM(-1), DR(dst)));
/* Loop for searching the highest bit. */
FAIL_IF(push_inst(compiler, ADDIU | S(dst) | T(dst) | IMM(1), DR(dst)));
FAIL_IF(push_inst(compiler, BGEZ | S(TMP_REG1) | IMM(-2), UNMOVABLE_INS));
FAIL_IF(push_inst(compiler, SLL | T(TMP_REG1) | D(TMP_REG1) | SH_IMM(1), UNMOVABLE_INS));
if (op & SLJIT_SET_E)
return push_inst(compiler, ADDU | S(dst) | TA(0) | DA(EQUAL_FLAG), EQUAL_FLAG);
#endif
return SLJIT_SUCCESS;
case SLJIT_ADD:
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_O) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
else
FAIL_IF(push_inst(compiler, NOR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
}
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, ADDIU | S(src1) | TA(EQUAL_FLAG) | IMM(src2), EQUAL_FLAG));
if (op & (SLJIT_SET_C | SLJIT_SET_O)) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, ORI | S(src1) | TA(ULESS_FLAG) | IMM(src2), ULESS_FLAG));
else {
FAIL_IF(push_inst(compiler, ADDIU | SA(0) | TA(ULESS_FLAG) | IMM(src2), ULESS_FLAG));
FAIL_IF(push_inst(compiler, OR | S(src1) | TA(ULESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG));
}
}
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, ADDIU | S(src1) | T(dst) | IMM(src2), DR(dst)));
}
else {
if (op & SLJIT_SET_O)
FAIL_IF(push_inst(compiler, XOR | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, ADDU | S(src1) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (op & (SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src2) | DA(ULESS_FLAG), ULESS_FLAG));
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, ADDU | S(src1) | T(src2) | D(dst), DR(dst)));
}
/* a + b >= a | b (otherwise, the carry should be set to 1). */
if (op & (SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(ULESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG));
if (!(op & SLJIT_SET_O))
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, SLL | TA(ULESS_FLAG) | D(TMP_REG1) | SH_IMM(31), DR(TMP_REG1)));
FAIL_IF(push_inst(compiler, XOR | S(TMP_REG1) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, XOR | S(dst) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
return push_inst(compiler, SLL | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG) | SH_IMM(31), OVERFLOW_FLAG);
case SLJIT_ADDC:
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_C) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, ORI | S(src1) | TA(OVERFLOW_FLAG) | IMM(src2), OVERFLOW_FLAG));
else {
FAIL_IF(push_inst(compiler, ADDIU | SA(0) | TA(OVERFLOW_FLAG) | IMM(src2), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, OR | S(src1) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
}
}
FAIL_IF(push_inst(compiler, ADDIU | S(src1) | T(dst) | IMM(src2), DR(dst)));
} else {
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
/* dst may be the same as src1 or src2. */
FAIL_IF(push_inst(compiler, ADDU | S(src1) | T(src2) | D(dst), DR(dst)));
}
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, ADDU | S(dst) | TA(ULESS_FLAG) | D(dst), DR(dst)));
if (!(op & SLJIT_SET_C))
return SLJIT_SUCCESS;
/* Set ULESS_FLAG to (dst == 0) && (ULESS_FLAG == 1). */
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(ULESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG));
/* Set carry flag. */
return push_inst(compiler, OR | SA(ULESS_FLAG) | TA(OVERFLOW_FLAG) | DA(ULESS_FLAG), ULESS_FLAG);
case SLJIT_SUB:
if ((flags & SRC2_IMM) && ((op & (SLJIT_SET_U | SLJIT_SET_S)) || src2 == SIMM_MIN)) {
FAIL_IF(push_inst(compiler, ADDIU | SA(0) | T(TMP_REG2) | IMM(src2), DR(TMP_REG2)));
src2 = TMP_REG2;
flags &= ~SRC2_IMM;
}
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_O) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
else
FAIL_IF(push_inst(compiler, NOR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
}
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, ADDIU | S(src1) | TA(EQUAL_FLAG) | IMM(-src2), EQUAL_FLAG));
if (op & (SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, SLTIU | S(src1) | TA(ULESS_FLAG) | IMM(src2), ULESS_FLAG));
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, ADDIU | S(src1) | T(dst) | IMM(-src2), DR(dst)));
}
else {
if (op & SLJIT_SET_O)
FAIL_IF(push_inst(compiler, XOR | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, SUBU | S(src1) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (op & (SLJIT_SET_U | SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, SLTU | S(src1) | T(src2) | DA(ULESS_FLAG), ULESS_FLAG));
if (op & SLJIT_SET_U)
FAIL_IF(push_inst(compiler, SLTU | S(src2) | T(src1) | DA(UGREATER_FLAG), UGREATER_FLAG));
if (op & SLJIT_SET_S) {
FAIL_IF(push_inst(compiler, SLT | S(src1) | T(src2) | DA(LESS_FLAG), LESS_FLAG));
FAIL_IF(push_inst(compiler, SLT | S(src2) | T(src1) | DA(GREATER_FLAG), GREATER_FLAG));
}
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E | SLJIT_SET_U | SLJIT_SET_S | SLJIT_SET_C))
FAIL_IF(push_inst(compiler, SUBU | S(src1) | T(src2) | D(dst), DR(dst)));
}
if (!(op & SLJIT_SET_O))
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, SLL | TA(ULESS_FLAG) | D(TMP_REG1) | SH_IMM(31), DR(TMP_REG1)));
FAIL_IF(push_inst(compiler, XOR | S(TMP_REG1) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, XOR | S(dst) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
return push_inst(compiler, SRL | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG) | SH_IMM(31), OVERFLOW_FLAG);
case SLJIT_SUBC:
if ((flags & SRC2_IMM) && src2 == SIMM_MIN) {
FAIL_IF(push_inst(compiler, ADDIU | SA(0) | T(TMP_REG2) | IMM(src2), DR(TMP_REG2)));
src2 = TMP_REG2;
flags &= ~SRC2_IMM;
}
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTIU | S(src1) | TA(OVERFLOW_FLAG) | IMM(src2), OVERFLOW_FLAG));
/* dst may be the same as src1 or src2. */
FAIL_IF(push_inst(compiler, ADDIU | S(src1) | T(dst) | IMM(-src2), DR(dst)));
}
else {
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTU | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
/* dst may be the same as src1 or src2. */
FAIL_IF(push_inst(compiler, SUBU | S(src1) | T(src2) | D(dst), DR(dst)));
}
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(ULESS_FLAG) | DA(LESS_FLAG), LESS_FLAG));
FAIL_IF(push_inst(compiler, SUBU | S(dst) | TA(ULESS_FLAG) | D(dst), DR(dst)));
return (op & SLJIT_SET_C) ? push_inst(compiler, OR | SA(OVERFLOW_FLAG) | TA(LESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG) : SLJIT_SUCCESS;
case SLJIT_MUL:
SLJIT_ASSERT(!(flags & SRC2_IMM));
if (!(op & SLJIT_SET_O)) {
#if (defined SLJIT_MIPS_R1 && SLJIT_MIPS_R1)
return push_inst(compiler, MUL | S(src1) | T(src2) | D(dst), DR(dst));
#else
FAIL_IF(push_inst(compiler, MULT | S(src1) | T(src2), MOVABLE_INS));
return push_inst(compiler, MFLO | D(dst), DR(dst));
#endif
}
FAIL_IF(push_inst(compiler, MULT | S(src1) | T(src2), MOVABLE_INS));
FAIL_IF(push_inst(compiler, MFHI | DA(ULESS_FLAG), ULESS_FLAG));
FAIL_IF(push_inst(compiler, MFLO | D(dst), DR(dst)));
FAIL_IF(push_inst(compiler, SRA | T(dst) | DA(UGREATER_FLAG) | SH_IMM(31), UGREATER_FLAG));
return push_inst(compiler, SUBU | SA(ULESS_FLAG) | TA(UGREATER_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG);
case SLJIT_AND:
EMIT_LOGICAL(ANDI, AND);
return SLJIT_SUCCESS;
case SLJIT_OR:
EMIT_LOGICAL(ORI, OR);
return SLJIT_SUCCESS;
case SLJIT_XOR:
EMIT_LOGICAL(XORI, XOR);
return SLJIT_SUCCESS;
case SLJIT_SHL:
EMIT_SHIFT(SLL, SLLV);
return SLJIT_SUCCESS;
case SLJIT_LSHR:
EMIT_SHIFT(SRL, SRLV);
return SLJIT_SUCCESS;
case SLJIT_ASHR:
EMIT_SHIFT(SRA, SRAV);
return SLJIT_SUCCESS;
}
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
}
static SLJIT_INLINE sljit_si emit_const(struct sljit_compiler *compiler, sljit_si dst, sljit_sw init_value)
{
FAIL_IF(push_inst(compiler, LUI | T(dst) | IMM(init_value >> 16), DR(dst)));
return push_inst(compiler, ORI | S(dst) | T(dst) | IMM(init_value), DR(dst));
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_jump_addr(sljit_uw addr, sljit_uw new_addr)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_addr >> 16) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | (new_addr & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 2);
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_const(sljit_uw addr, sljit_sw new_constant)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_constant >> 16) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | (new_constant & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 2);
}
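/* Sketch of the patching above (hypothetical address): emit_const() emits a
   LUI/ORI pair, so inst[0] holds the upper and inst[1] the lower 16 bits of
   the value. Patching to 0x12345678 rewrites them to LUI ..., 0x1234 and
   ORI ..., 0x5678, then flushes the two instructions from the i-cache. */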

View File

@ -0,0 +1,469 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* mips 64-bit arch dependent functions. */
static sljit_si load_immediate(struct sljit_compiler *compiler, sljit_si dst_ar, sljit_sw imm)
{
sljit_si shift = 32;
sljit_si shift2;
sljit_si inv = 0;
sljit_ins ins;
sljit_uw uimm;
if (!(imm & ~0xffff))
return push_inst(compiler, ORI | SA(0) | TA(dst_ar) | IMM(imm), dst_ar);
if (imm < 0 && imm >= SIMM_MIN)
return push_inst(compiler, ADDIU | SA(0) | TA(dst_ar) | IMM(imm), dst_ar);
if (imm <= 0x7fffffffl && imm >= -0x80000000l) {
FAIL_IF(push_inst(compiler, LUI | TA(dst_ar) | IMM(imm >> 16), dst_ar));
return (imm & 0xffff) ? push_inst(compiler, ORI | SA(dst_ar) | TA(dst_ar) | IMM(imm), dst_ar) : SLJIT_SUCCESS;
}
/* Zero extended number. */
uimm = imm;
if (imm < 0) {
uimm = ~imm;
inv = 1;
}
while (!(uimm & 0xff00000000000000l)) {
shift -= 8;
uimm <<= 8;
}
if (!(uimm & 0xf000000000000000l)) {
shift -= 4;
uimm <<= 4;
}
if (!(uimm & 0xc000000000000000l)) {
shift -= 2;
uimm <<= 2;
}
if ((sljit_sw)uimm < 0) {
uimm >>= 1;
shift += 1;
}
SLJIT_ASSERT(((uimm & 0xc000000000000000l) == 0x4000000000000000l) && (shift > 0) && (shift <= 32));
if (inv)
uimm = ~uimm;
FAIL_IF(push_inst(compiler, LUI | TA(dst_ar) | IMM(uimm >> 48), dst_ar));
if (uimm & 0x0000ffff00000000l)
FAIL_IF(push_inst(compiler, ORI | SA(dst_ar) | TA(dst_ar) | IMM(uimm >> 32), dst_ar));
imm &= (1l << shift) - 1;
if (!(imm & ~0xffff)) {
ins = (shift == 32) ? DSLL32 : DSLL;
if (shift < 32)
ins |= SH_IMM(shift);
FAIL_IF(push_inst(compiler, ins | TA(dst_ar) | DA(dst_ar), dst_ar));
return !(imm & 0xffff) ? SLJIT_SUCCESS : push_inst(compiler, ORI | SA(dst_ar) | TA(dst_ar) | IMM(imm), dst_ar);
}
/* Double shifts need to be performed. */
uimm <<= 32;
shift2 = shift - 16;
while (!(uimm & 0xf000000000000000l)) {
shift2 -= 4;
uimm <<= 4;
}
if (!(uimm & 0xc000000000000000l)) {
shift2 -= 2;
uimm <<= 2;
}
if (!(uimm & 0x8000000000000000l)) {
shift2--;
uimm <<= 1;
}
SLJIT_ASSERT((uimm & 0x8000000000000000l) && (shift2 > 0) && (shift2 <= 16));
FAIL_IF(push_inst(compiler, DSLL | TA(dst_ar) | DA(dst_ar) | SH_IMM(shift - shift2), dst_ar));
FAIL_IF(push_inst(compiler, ORI | SA(dst_ar) | TA(dst_ar) | IMM(uimm >> 48), dst_ar));
FAIL_IF(push_inst(compiler, DSLL | TA(dst_ar) | DA(dst_ar) | SH_IMM(shift2), dst_ar));
imm &= (1l << shift2) - 1;
return !(imm & 0xffff) ? SLJIT_SUCCESS : push_inst(compiler, ORI | SA(dst_ar) | TA(dst_ar) | IMM(imm), dst_ar);
}
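/* Worked example (hypothetical value): imm == 0x12345678 still fits in the
   signed 32 bit range, so it takes the early LUI/ORI path; only values
   outside [-0x80000000, 0x7fffffff] reach the shift-based construction
   after the early returns. */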
#define SELECT_OP(a, b) \
(!(op & SLJIT_INT_OP) ? a : b)
#define EMIT_LOGICAL(op_imm, op_norm) \
if (flags & SRC2_IMM) { \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, op_imm | S(src1) | TA(EQUAL_FLAG) | IMM(src2), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, op_imm | S(src1) | T(dst) | IMM(src2), DR(dst))); \
} \
else { \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, op_norm | S(src1) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, op_norm | S(src1) | T(src2) | D(dst), DR(dst))); \
}
#define EMIT_SHIFT(op_dimm, op_dimm32, op_imm, op_dv, op_v) \
if (flags & SRC2_IMM) { \
if (src2 >= 32) { \
SLJIT_ASSERT(!(op & SLJIT_INT_OP)); \
ins = op_dimm32; \
src2 -= 32; \
} \
else \
ins = (op & SLJIT_INT_OP) ? op_imm : op_dimm; \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, ins | T(src1) | DA(EQUAL_FLAG) | SH_IMM(src2), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, ins | T(src1) | D(dst) | SH_IMM(src2), DR(dst))); \
} \
else { \
ins = (op & SLJIT_INT_OP) ? op_v : op_dv; \
if (op & SLJIT_SET_E) \
FAIL_IF(push_inst(compiler, ins | S(src2) | T(src1) | DA(EQUAL_FLAG), EQUAL_FLAG)); \
if (CHECK_FLAGS(SLJIT_SET_E)) \
FAIL_IF(push_inst(compiler, ins | S(src2) | T(src1) | D(dst), DR(dst))); \
}
static SLJIT_INLINE sljit_si emit_single_op(struct sljit_compiler *compiler, sljit_si op, sljit_si flags,
sljit_si dst, sljit_si src1, sljit_sw src2)
{
sljit_ins ins;
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_P:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if (dst != src2)
return push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(src2) | TA(0) | D(dst), DR(dst));
return SLJIT_SUCCESS;
case SLJIT_MOV_UB:
case SLJIT_MOV_SB:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SB) {
FAIL_IF(push_inst(compiler, DSLL32 | T(src2) | D(dst) | SH_IMM(24), DR(dst)));
return push_inst(compiler, DSRA32 | T(dst) | D(dst) | SH_IMM(24), DR(dst));
}
return push_inst(compiler, ANDI | S(src2) | T(dst) | IMM(0xff), DR(dst));
}
else if (dst != src2)
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
case SLJIT_MOV_UH:
case SLJIT_MOV_SH:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SH) {
FAIL_IF(push_inst(compiler, DSLL32 | T(src2) | D(dst) | SH_IMM(16), DR(dst)));
return push_inst(compiler, DSRA32 | T(dst) | D(dst) | SH_IMM(16), DR(dst));
}
return push_inst(compiler, ANDI | S(src2) | T(dst) | IMM(0xffff), DR(dst));
}
else if (dst != src2)
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
case SLJIT_MOV_UI:
SLJIT_ASSERT(!(op & SLJIT_INT_OP));
FAIL_IF(push_inst(compiler, DSLL32 | T(src2) | D(dst) | SH_IMM(0), DR(dst)));
return push_inst(compiler, DSRL32 | T(dst) | D(dst) | SH_IMM(0), DR(dst));
case SLJIT_MOV_SI:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
return push_inst(compiler, SLL | T(src2) | D(dst) | SH_IMM(0), DR(dst));
case SLJIT_NOT:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, NOR | S(src2) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, NOR | S(src2) | T(src2) | D(dst), DR(dst)));
return SLJIT_SUCCESS;
case SLJIT_CLZ:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
#if (defined SLJIT_MIPS_R1 && SLJIT_MIPS_R1)
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, SELECT_OP(DCLZ, CLZ) | S(src2) | TA(EQUAL_FLAG) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, SELECT_OP(DCLZ, CLZ) | S(src2) | T(dst) | D(dst), DR(dst)));
#else
if (SLJIT_UNLIKELY(flags & UNUSED_DEST)) {
FAIL_IF(push_inst(compiler, SELECT_OP(DSRL32, SRL) | T(src2) | DA(EQUAL_FLAG) | SH_IMM(31), EQUAL_FLAG));
return push_inst(compiler, XORI | SA(EQUAL_FLAG) | TA(EQUAL_FLAG) | IMM(1), EQUAL_FLAG);
}
/* Nearly all instructions are unmovable in the following sequence. */
FAIL_IF(push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(src2) | TA(0) | D(TMP_REG1), DR(TMP_REG1)));
/* Check zero. */
FAIL_IF(push_inst(compiler, BEQ | S(TMP_REG1) | TA(0) | IMM(5), UNMOVABLE_INS));
FAIL_IF(push_inst(compiler, ORI | SA(0) | T(dst) | IMM((op & SLJIT_INT_OP) ? 32 : 64), UNMOVABLE_INS));
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | SA(0) | T(dst) | IMM(-1), DR(dst)));
/* Loop for searching the highest bit. */
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | S(dst) | T(dst) | IMM(1), DR(dst)));
FAIL_IF(push_inst(compiler, BGEZ | S(TMP_REG1) | IMM(-2), UNMOVABLE_INS));
FAIL_IF(push_inst(compiler, SELECT_OP(DSLL, SLL) | T(TMP_REG1) | D(TMP_REG1) | SH_IMM(1), UNMOVABLE_INS));
if (op & SLJIT_SET_E)
return push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(dst) | TA(0) | DA(EQUAL_FLAG), EQUAL_FLAG);
#endif
return SLJIT_SUCCESS;
case SLJIT_ADD:
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_O) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
else
FAIL_IF(push_inst(compiler, NOR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
}
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | S(src1) | TA(EQUAL_FLAG) | IMM(src2), EQUAL_FLAG));
if (op & (SLJIT_SET_C | SLJIT_SET_O)) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, ORI | S(src1) | TA(ULESS_FLAG) | IMM(src2), ULESS_FLAG));
else {
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | SA(0) | TA(ULESS_FLAG) | IMM(src2), ULESS_FLAG));
FAIL_IF(push_inst(compiler, OR | S(src1) | TA(ULESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG));
}
}
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | S(src1) | T(dst) | IMM(src2), DR(dst)));
}
else {
if (op & SLJIT_SET_O)
FAIL_IF(push_inst(compiler, XOR | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(src1) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (op & (SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src2) | DA(ULESS_FLAG), ULESS_FLAG));
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(src1) | T(src2) | D(dst), DR(dst)));
}
/* a + b >= a | b (otherwise, the carry should be set to 1). */
if (op & (SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(ULESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG));
if (!(op & SLJIT_SET_O))
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, SELECT_OP(DSLL32, SLL) | TA(ULESS_FLAG) | D(TMP_REG1) | SH_IMM(31), DR(TMP_REG1)));
FAIL_IF(push_inst(compiler, XOR | S(TMP_REG1) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, XOR | S(dst) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
return push_inst(compiler, SELECT_OP(DSRL32, SLL) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG) | SH_IMM(31), OVERFLOW_FLAG);
case SLJIT_ADDC:
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_C) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, ORI | S(src1) | TA(OVERFLOW_FLAG) | IMM(src2), OVERFLOW_FLAG));
else {
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | SA(0) | TA(OVERFLOW_FLAG) | IMM(src2), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, OR | S(src1) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
}
}
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | S(src1) | T(dst) | IMM(src2), DR(dst)));
} else {
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
/* dst may be the same as src1 or src2. */
FAIL_IF(push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(src1) | T(src2) | D(dst), DR(dst)));
}
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(dst) | TA(ULESS_FLAG) | D(dst), DR(dst)));
if (!(op & SLJIT_SET_C))
return SLJIT_SUCCESS;
/* Set ULESS_FLAG to (dst == 0) && (ULESS_FLAG == 1). */
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(ULESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG));
/* Set carry flag. */
return push_inst(compiler, OR | SA(ULESS_FLAG) | TA(OVERFLOW_FLAG) | DA(ULESS_FLAG), ULESS_FLAG);
case SLJIT_SUB:
if ((flags & SRC2_IMM) && ((op & (SLJIT_SET_U | SLJIT_SET_S)) || src2 == SIMM_MIN)) {
FAIL_IF(push_inst(compiler, ADDIU | SA(0) | T(TMP_REG2) | IMM(src2), DR(TMP_REG2)));
src2 = TMP_REG2;
flags &= ~SRC2_IMM;
}
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_O) {
if (src2 >= 0)
FAIL_IF(push_inst(compiler, OR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
else
FAIL_IF(push_inst(compiler, NOR | S(src1) | T(src1) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
}
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | S(src1) | TA(EQUAL_FLAG) | IMM(-src2), EQUAL_FLAG));
if (op & (SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, SLTIU | S(src1) | TA(ULESS_FLAG) | IMM(src2), ULESS_FLAG));
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E))
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | S(src1) | T(dst) | IMM(-src2), DR(dst)));
}
else {
if (op & SLJIT_SET_O)
FAIL_IF(push_inst(compiler, XOR | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
if (op & SLJIT_SET_E)
FAIL_IF(push_inst(compiler, SELECT_OP(DSUBU, SUBU) | S(src1) | T(src2) | DA(EQUAL_FLAG), EQUAL_FLAG));
if (op & (SLJIT_SET_U | SLJIT_SET_C | SLJIT_SET_O))
FAIL_IF(push_inst(compiler, SLTU | S(src1) | T(src2) | DA(ULESS_FLAG), ULESS_FLAG));
if (op & SLJIT_SET_U)
FAIL_IF(push_inst(compiler, SLTU | S(src2) | T(src1) | DA(UGREATER_FLAG), UGREATER_FLAG));
if (op & SLJIT_SET_S) {
FAIL_IF(push_inst(compiler, SLT | S(src1) | T(src2) | DA(LESS_FLAG), LESS_FLAG));
FAIL_IF(push_inst(compiler, SLT | S(src2) | T(src1) | DA(GREATER_FLAG), GREATER_FLAG));
}
/* dst may be the same as src1 or src2. */
if (CHECK_FLAGS(SLJIT_SET_E | SLJIT_SET_U | SLJIT_SET_S | SLJIT_SET_C))
FAIL_IF(push_inst(compiler, SELECT_OP(DSUBU, SUBU) | S(src1) | T(src2) | D(dst), DR(dst)));
}
if (!(op & SLJIT_SET_O))
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, SELECT_OP(DSLL32, SLL) | TA(ULESS_FLAG) | D(TMP_REG1) | SH_IMM(31), DR(TMP_REG1)));
FAIL_IF(push_inst(compiler, XOR | S(TMP_REG1) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
FAIL_IF(push_inst(compiler, XOR | S(dst) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
return push_inst(compiler, SELECT_OP(DSRL32, SRL) | TA(OVERFLOW_FLAG) | DA(OVERFLOW_FLAG) | SH_IMM(31), OVERFLOW_FLAG);
case SLJIT_SUBC:
if ((flags & SRC2_IMM) && src2 == SIMM_MIN) {
FAIL_IF(push_inst(compiler, ADDIU | SA(0) | T(TMP_REG2) | IMM(src2), DR(TMP_REG2)));
src2 = TMP_REG2;
flags &= ~SRC2_IMM;
}
if (flags & SRC2_IMM) {
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTIU | S(src1) | TA(OVERFLOW_FLAG) | IMM(src2), OVERFLOW_FLAG));
/* dst may be the same as src1 or src2. */
FAIL_IF(push_inst(compiler, SELECT_OP(DADDIU, ADDIU) | S(src1) | T(dst) | IMM(-src2), DR(dst)));
}
else {
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTU | S(src1) | T(src2) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG));
/* dst may be the same as src1 or src2. */
FAIL_IF(push_inst(compiler, SELECT_OP(DSUBU, SUBU) | S(src1) | T(src2) | D(dst), DR(dst)));
}
if (op & SLJIT_SET_C)
FAIL_IF(push_inst(compiler, SLTU | S(dst) | TA(ULESS_FLAG) | DA(LESS_FLAG), LESS_FLAG));
FAIL_IF(push_inst(compiler, SELECT_OP(DSUBU, SUBU) | S(dst) | TA(ULESS_FLAG) | D(dst), DR(dst)));
return (op & SLJIT_SET_C) ? push_inst(compiler, OR | SA(OVERFLOW_FLAG) | TA(LESS_FLAG) | DA(ULESS_FLAG), ULESS_FLAG) : SLJIT_SUCCESS;
case SLJIT_MUL:
SLJIT_ASSERT(!(flags & SRC2_IMM));
if (!(op & SLJIT_SET_O)) {
#if (defined SLJIT_MIPS_R1 && SLJIT_MIPS_R1)
if (op & SLJIT_INT_OP)
return push_inst(compiler, MUL | S(src1) | T(src2) | D(dst), DR(dst));
FAIL_IF(push_inst(compiler, DMULT | S(src1) | T(src2), MOVABLE_INS));
return push_inst(compiler, MFLO | D(dst), DR(dst));
#else
FAIL_IF(push_inst(compiler, SELECT_OP(DMULT, MULT) | S(src1) | T(src2), MOVABLE_INS));
return push_inst(compiler, MFLO | D(dst), DR(dst));
#endif
}
FAIL_IF(push_inst(compiler, SELECT_OP(DMULT, MULT) | S(src1) | T(src2), MOVABLE_INS));
FAIL_IF(push_inst(compiler, MFHI | DA(ULESS_FLAG), ULESS_FLAG));
FAIL_IF(push_inst(compiler, MFLO | D(dst), DR(dst)));
FAIL_IF(push_inst(compiler, SELECT_OP(DSRA32, SRA) | T(dst) | DA(UGREATER_FLAG) | SH_IMM(31), UGREATER_FLAG));
return push_inst(compiler, SELECT_OP(DSUBU, SUBU) | SA(ULESS_FLAG) | TA(UGREATER_FLAG) | DA(OVERFLOW_FLAG), OVERFLOW_FLAG);
case SLJIT_AND:
EMIT_LOGICAL(ANDI, AND);
return SLJIT_SUCCESS;
case SLJIT_OR:
EMIT_LOGICAL(ORI, OR);
return SLJIT_SUCCESS;
case SLJIT_XOR:
EMIT_LOGICAL(XORI, XOR);
return SLJIT_SUCCESS;
case SLJIT_SHL:
EMIT_SHIFT(DSLL, DSLL32, SLL, DSLLV, SLLV);
return SLJIT_SUCCESS;
case SLJIT_LSHR:
EMIT_SHIFT(DSRL, DSRL32, SRL, DSRLV, SRLV);
return SLJIT_SUCCESS;
case SLJIT_ASHR:
EMIT_SHIFT(DSRA, DSRA32, SRA, DSRAV, SRAV);
return SLJIT_SUCCESS;
}
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
}
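/* emit_const always emits the full six-instruction LUI/ORI/DSLL/ORI/DSLL/ORI
   sequence, even for values that would fit a shorter encoding, so that
   sljit_set_const() below can patch the 64-bit value in place later. */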
static SLJIT_INLINE sljit_si emit_const(struct sljit_compiler *compiler, sljit_si dst, sljit_sw init_value)
{
FAIL_IF(push_inst(compiler, LUI | T(dst) | IMM(init_value >> 48), DR(dst)));
FAIL_IF(push_inst(compiler, ORI | S(dst) | T(dst) | IMM(init_value >> 32), DR(dst)));
FAIL_IF(push_inst(compiler, DSLL | T(dst) | D(dst) | SH_IMM(16), DR(dst)));
FAIL_IF(push_inst(compiler, ORI | S(dst) | T(dst) | IMM(init_value >> 16), DR(dst)));
FAIL_IF(push_inst(compiler, DSLL | T(dst) | D(dst) | SH_IMM(16), DR(dst)));
return push_inst(compiler, ORI | S(dst) | T(dst) | IMM(init_value), DR(dst));
}
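/* Patches the sequence produced by emit_const: the four 16-bit immediates
   live in instruction slots 0, 1, 3 and 5; slots 2 and 4 are the DSLL
   shifts and are left untouched. */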
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_jump_addr(sljit_uw addr, sljit_uw new_addr)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_addr >> 48) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | ((new_addr >> 32) & 0xffff);
inst[3] = (inst[3] & 0xffff0000) | ((new_addr >> 16) & 0xffff);
inst[5] = (inst[5] & 0xffff0000) | (new_addr & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 6);
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_const(sljit_uw addr, sljit_sw new_constant)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_constant >> 48) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | ((new_constant >> 32) & 0xffff);
inst[3] = (inst[3] & 0xffff0000) | ((new_constant >> 16) & 0xffff);
inst[5] = (inst[5] & 0xffff0000) | (new_constant & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 6);
}
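
Taken together, emit_const and sljit_set_const implement run-time patchable constants. Below is a minimal sketch of how a client might drive this through sljit's public API; sljit_emit_const, sljit_generate_code and sljit_get_const_addr are public entry points of this sljit vintage, while the register choice, values and helper name are illustrative assumptions, not part of this diff.

#include "sljitLir.h"

/* Sketch only; error handling omitted. */
static void *build_patchable(struct sljit_compiler *compiler, struct sljit_const **handle)
{
	/* Emits the full-length load sequence shown above into SLJIT_R0. */
	*handle = sljit_emit_const(compiler, SLJIT_R0, 0, 42);
	return sljit_generate_code(compiler);
}

/* Later, rewrite the embedded immediate in place: */
/* sljit_set_const(sljit_get_const_addr(*handle), 1234); */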

File diff suppressed because it is too large

View File

@ -0,0 +1,269 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* ppc 32-bit arch dependent functions. */
static sljit_si load_immediate(struct sljit_compiler *compiler, sljit_si reg, sljit_sw imm)
{
if (imm <= SIMM_MAX && imm >= SIMM_MIN)
return push_inst(compiler, ADDI | D(reg) | A(0) | IMM(imm));
if (!(imm & ~0xffff))
return push_inst(compiler, ORI | S(TMP_ZERO) | A(reg) | IMM(imm));
FAIL_IF(push_inst(compiler, ADDIS | D(reg) | A(0) | IMM(imm >> 16)));
return (imm & 0xffff) ? push_inst(compiler, ORI | S(reg) | A(reg) | IMM(imm)) : SLJIT_SUCCESS;
}
#define INS_CLEAR_LEFT(dst, src, from) \
(RLWINM | S(src) | A(dst) | ((from) << 6) | (31 << 1))
static SLJIT_INLINE sljit_si emit_single_op(struct sljit_compiler *compiler, sljit_si op, sljit_si flags,
sljit_si dst, sljit_si src1, sljit_si src2)
{
switch (op) {
case SLJIT_MOV:
case SLJIT_MOV_UI:
case SLJIT_MOV_SI:
case SLJIT_MOV_P:
SLJIT_ASSERT(src1 == TMP_REG1);
if (dst != src2)
return push_inst(compiler, OR | S(src2) | A(dst) | B(src2));
return SLJIT_SUCCESS;
case SLJIT_MOV_UB:
case SLJIT_MOV_SB:
SLJIT_ASSERT(src1 == TMP_REG1);
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SB)
return push_inst(compiler, EXTSB | S(src2) | A(dst));
return push_inst(compiler, INS_CLEAR_LEFT(dst, src2, 24));
}
else if ((flags & REG_DEST) && op == SLJIT_MOV_SB)
return push_inst(compiler, EXTSB | S(src2) | A(dst));
else {
SLJIT_ASSERT(dst == src2);
}
return SLJIT_SUCCESS;
case SLJIT_MOV_UH:
case SLJIT_MOV_SH:
SLJIT_ASSERT(src1 == TMP_REG1);
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SH)
return push_inst(compiler, EXTSH | S(src2) | A(dst));
return push_inst(compiler, INS_CLEAR_LEFT(dst, src2, 16));
}
else {
SLJIT_ASSERT(dst == src2);
}
return SLJIT_SUCCESS;
case SLJIT_NOT:
SLJIT_ASSERT(src1 == TMP_REG1);
return push_inst(compiler, NOR | RC(flags) | S(src2) | A(dst) | B(src2));
case SLJIT_NEG:
SLJIT_ASSERT(src1 == TMP_REG1);
return push_inst(compiler, NEG | OERC(flags) | D(dst) | A(src2));
case SLJIT_CLZ:
SLJIT_ASSERT(src1 == TMP_REG1);
return push_inst(compiler, CNTLZW | RC(flags) | S(src2) | A(dst));
case SLJIT_ADD:
if (flags & ALT_FORM1) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ADDI | D(dst) | A(src1) | compiler->imm);
}
if (flags & ALT_FORM2) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ADDIS | D(dst) | A(src1) | compiler->imm);
}
if (flags & ALT_FORM3) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ADDIC | D(dst) | A(src1) | compiler->imm);
}
if (flags & ALT_FORM4) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
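/* ADDI sign-extends its 16-bit immediate; when bit 15 of the low half is
   set, the high half gets an extra +1 to compensate. */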
FAIL_IF(push_inst(compiler, ADDI | D(dst) | A(src1) | (compiler->imm & 0xffff)));
return push_inst(compiler, ADDIS | D(dst) | A(dst) | (((compiler->imm >> 16) & 0xffff) + ((compiler->imm >> 15) & 0x1)));
}
if (!(flags & ALT_SET_FLAGS))
return push_inst(compiler, ADD | D(dst) | A(src1) | B(src2));
return push_inst(compiler, ADDC | OERC(ALT_SET_FLAGS) | D(dst) | A(src1) | B(src2));
case SLJIT_ADDC:
if (flags & ALT_FORM1) {
FAIL_IF(push_inst(compiler, MFXER | D(0)));
FAIL_IF(push_inst(compiler, ADDE | D(dst) | A(src1) | B(src2)));
return push_inst(compiler, MTXER | S(0));
}
return push_inst(compiler, ADDE | D(dst) | A(src1) | B(src2));
case SLJIT_SUB:
if (flags & ALT_FORM1) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, SUBFIC | D(dst) | A(src1) | compiler->imm);
}
if (flags & (ALT_FORM2 | ALT_FORM3)) {
SLJIT_ASSERT(src2 == TMP_REG2);
if (flags & ALT_FORM2)
FAIL_IF(push_inst(compiler, CMPI | CRD(0) | A(src1) | compiler->imm));
if (flags & ALT_FORM3)
return push_inst(compiler, CMPLI | CRD(4) | A(src1) | compiler->imm);
return SLJIT_SUCCESS;
}
if (flags & (ALT_FORM4 | ALT_FORM5)) {
if (flags & ALT_FORM4)
FAIL_IF(push_inst(compiler, CMPL | CRD(4) | A(src1) | B(src2)));
if (flags & ALT_FORM5)
FAIL_IF(push_inst(compiler, CMP | CRD(0) | A(src1) | B(src2)));
return SLJIT_SUCCESS;
}
if (!(flags & ALT_SET_FLAGS))
return push_inst(compiler, SUBF | D(dst) | A(src2) | B(src1));
if (flags & ALT_FORM6)
FAIL_IF(push_inst(compiler, CMPL | CRD(4) | A(src1) | B(src2)));
return push_inst(compiler, SUBFC | OERC(ALT_SET_FLAGS) | D(dst) | A(src2) | B(src1));
case SLJIT_SUBC:
if (flags & ALT_FORM1) {
FAIL_IF(push_inst(compiler, MFXER | D(0)));
FAIL_IF(push_inst(compiler, SUBFE | D(dst) | A(src2) | B(src1)));
return push_inst(compiler, MTXER | S(0));
}
return push_inst(compiler, SUBFE | D(dst) | A(src2) | B(src1));
case SLJIT_MUL:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, MULLI | D(dst) | A(src1) | compiler->imm);
}
return push_inst(compiler, MULLW | OERC(flags) | D(dst) | A(src2) | B(src1));
case SLJIT_AND:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ANDI | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM2) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ANDIS | S(src1) | A(dst) | compiler->imm);
}
return push_inst(compiler, AND | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_OR:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ORI | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM2) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ORIS | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM3) {
SLJIT_ASSERT(src2 == TMP_REG2);
FAIL_IF(push_inst(compiler, ORI | S(src1) | A(dst) | IMM(compiler->imm)));
return push_inst(compiler, ORIS | S(dst) | A(dst) | IMM(compiler->imm >> 16));
}
return push_inst(compiler, OR | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_XOR:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, XORI | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM2) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, XORIS | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM3) {
SLJIT_ASSERT(src2 == TMP_REG2);
FAIL_IF(push_inst(compiler, XORI | S(src1) | A(dst) | IMM(compiler->imm)));
return push_inst(compiler, XORIS | S(dst) | A(dst) | IMM(compiler->imm >> 16));
}
return push_inst(compiler, XOR | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_SHL:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
compiler->imm &= 0x1f;
return push_inst(compiler, RLWINM | RC(flags) | S(src1) | A(dst) | (compiler->imm << 11) | ((31 - compiler->imm) << 1));
}
return push_inst(compiler, SLW | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_LSHR:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
compiler->imm &= 0x1f;
return push_inst(compiler, RLWINM | RC(flags) | S(src1) | A(dst) | (((32 - compiler->imm) & 0x1f) << 11) | (compiler->imm << 6) | (31 << 1));
}
return push_inst(compiler, SRW | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_ASHR:
if (flags & ALT_FORM3)
FAIL_IF(push_inst(compiler, MFXER | D(0)));
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
compiler->imm &= 0x1f;
FAIL_IF(push_inst(compiler, SRAWI | RC(flags) | S(src1) | A(dst) | (compiler->imm << 11)));
}
else
FAIL_IF(push_inst(compiler, SRAW | RC(flags) | S(src1) | A(dst) | B(src2)));
return (flags & ALT_FORM3) ? push_inst(compiler, MTXER | S(0)) : SLJIT_SUCCESS;
}
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
}
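/* The ADDIS/ORI pair is always emitted in full, even for small values, so
   that sljit_set_jump_addr() / sljit_set_const() can patch both 16-bit
   halves in place. */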
static SLJIT_INLINE sljit_si emit_const(struct sljit_compiler *compiler, sljit_si reg, sljit_sw init_value)
{
FAIL_IF(push_inst(compiler, ADDIS | D(reg) | A(0) | IMM(init_value >> 16)));
return push_inst(compiler, ORI | S(reg) | A(reg) | IMM(init_value));
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_jump_addr(sljit_uw addr, sljit_uw new_addr)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_addr >> 16) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | (new_addr & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 2);
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_const(sljit_uw addr, sljit_sw new_constant)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_constant >> 16) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | (new_constant & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 2);
}

View File

@ -0,0 +1,421 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* ppc 64-bit arch dependent functions. */
#if defined(__GNUC__) || (defined(__IBM_GCC_ASM) && __IBM_GCC_ASM)
#define ASM_SLJIT_CLZ(src, dst) \
__asm__ volatile ( "cntlzd %0, %1" : "=r"(dst) : "r"(src) )
#elif defined(__xlc__)
#error "Please enable GCC syntax for inline assembly statements"
#else
#error "Must implement count leading zeroes"
#endif
#define RLDI(dst, src, sh, mb, type) \
(HI(30) | S(src) | A(dst) | ((type) << 2) | (((sh) & 0x1f) << 11) | (((sh) & 0x20) >> 4) | (((mb) & 0x1f) << 6) | ((mb) & 0x20))
#define PUSH_RLDICR(reg, shift) \
push_inst(compiler, RLDI(reg, reg, 63 - shift, shift, 1))
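/* Loads a 64-bit immediate with the cheapest sequence available: one
   instruction for 16-bit values, two for 32-bit ones, shifted patterns
   found via count-leading-zeroes, and a general five-instruction form as
   the last resort. */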
static sljit_si load_immediate(struct sljit_compiler *compiler, sljit_si reg, sljit_sw imm)
{
sljit_uw tmp;
sljit_uw shift;
sljit_uw tmp2;
sljit_uw shift2;
if (imm <= SIMM_MAX && imm >= SIMM_MIN)
return push_inst(compiler, ADDI | D(reg) | A(0) | IMM(imm));
if (!(imm & ~0xffff))
return push_inst(compiler, ORI | S(TMP_ZERO) | A(reg) | IMM(imm));
if (imm <= 0x7fffffffl && imm >= -0x80000000l) {
FAIL_IF(push_inst(compiler, ADDIS | D(reg) | A(0) | IMM(imm >> 16)));
return (imm & 0xffff) ? push_inst(compiler, ORI | S(reg) | A(reg) | IMM(imm)) : SLJIT_SUCCESS;
}
/* Count leading zeroes. */
tmp = (imm >= 0) ? imm : ~imm;
ASM_SLJIT_CLZ(tmp, shift);
SLJIT_ASSERT(shift > 0);
shift--;
tmp = (imm << shift);
if ((tmp & ~0xffff000000000000ul) == 0) {
FAIL_IF(push_inst(compiler, ADDI | D(reg) | A(0) | IMM(tmp >> 48)));
shift += 15;
return PUSH_RLDICR(reg, shift);
}
if ((tmp & ~0xffffffff00000000ul) == 0) {
FAIL_IF(push_inst(compiler, ADDIS | D(reg) | A(0) | IMM(tmp >> 48)));
FAIL_IF(push_inst(compiler, ORI | S(reg) | A(reg) | IMM(tmp >> 32)));
shift += 31;
return PUSH_RLDICR(reg, shift);
}
/* Cut out the 16 bit from immediate. */
shift += 15;
tmp2 = imm & ((1ul << (63 - shift)) - 1);
if (tmp2 <= 0xffff) {
FAIL_IF(push_inst(compiler, ADDI | D(reg) | A(0) | IMM(tmp >> 48)));
FAIL_IF(PUSH_RLDICR(reg, shift));
return push_inst(compiler, ORI | S(reg) | A(reg) | tmp2);
}
if (tmp2 <= 0xffffffff) {
FAIL_IF(push_inst(compiler, ADDI | D(reg) | A(0) | IMM(tmp >> 48)));
FAIL_IF(PUSH_RLDICR(reg, shift));
FAIL_IF(push_inst(compiler, ORIS | S(reg) | A(reg) | (tmp2 >> 16)));
return (imm & 0xffff) ? push_inst(compiler, ORI | S(reg) | A(reg) | IMM(tmp2)) : SLJIT_SUCCESS;
}
ASM_SLJIT_CLZ(tmp2, shift2);
tmp2 <<= shift2;
if ((tmp2 & ~0xffff000000000000ul) == 0) {
FAIL_IF(push_inst(compiler, ADDI | D(reg) | A(0) | IMM(tmp >> 48)));
shift2 += 15;
shift += (63 - shift2);
FAIL_IF(PUSH_RLDICR(reg, shift));
FAIL_IF(push_inst(compiler, ORI | S(reg) | A(reg) | (tmp2 >> 48)));
return PUSH_RLDICR(reg, shift2);
}
/* The general version. */
FAIL_IF(push_inst(compiler, ADDIS | D(reg) | A(0) | IMM(imm >> 48)));
FAIL_IF(push_inst(compiler, ORI | S(reg) | A(reg) | IMM(imm >> 32)));
FAIL_IF(PUSH_RLDICR(reg, 31));
FAIL_IF(push_inst(compiler, ORIS | S(reg) | A(reg) | IMM(imm >> 16)));
return push_inst(compiler, ORI | S(reg) | A(reg) | IMM(imm));
}
/* Simplified mnemonics: clrldi. */
#define INS_CLEAR_LEFT(dst, src, from) \
(RLDICL | S(src) | A(dst) | ((from) << 6) | (1 << 5))
/* Sign extension for integer operations. */
#define UN_EXTS() \
if ((flags & (ALT_SIGN_EXT | REG2_SOURCE)) == (ALT_SIGN_EXT | REG2_SOURCE)) { \
FAIL_IF(push_inst(compiler, EXTSW | S(src2) | A(TMP_REG2))); \
src2 = TMP_REG2; \
}
#define BIN_EXTS() \
if (flags & ALT_SIGN_EXT) { \
if (flags & REG1_SOURCE) { \
FAIL_IF(push_inst(compiler, EXTSW | S(src1) | A(TMP_REG1))); \
src1 = TMP_REG1; \
} \
if (flags & REG2_SOURCE) { \
FAIL_IF(push_inst(compiler, EXTSW | S(src2) | A(TMP_REG2))); \
src2 = TMP_REG2; \
} \
}
#define BIN_IMM_EXTS() \
if ((flags & (ALT_SIGN_EXT | REG1_SOURCE)) == (ALT_SIGN_EXT | REG1_SOURCE)) { \
FAIL_IF(push_inst(compiler, EXTSW | S(src1) | A(TMP_REG1))); \
src1 = TMP_REG1; \
}
static SLJIT_INLINE sljit_si emit_single_op(struct sljit_compiler *compiler, sljit_si op, sljit_si flags,
sljit_si dst, sljit_si src1, sljit_si src2)
{
switch (op) {
case SLJIT_MOV:
case SLJIT_MOV_P:
SLJIT_ASSERT(src1 == TMP_REG1);
if (dst != src2)
return push_inst(compiler, OR | S(src2) | A(dst) | B(src2));
return SLJIT_SUCCESS;
case SLJIT_MOV_UI:
case SLJIT_MOV_SI:
SLJIT_ASSERT(src1 == TMP_REG1);
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SI)
return push_inst(compiler, EXTSW | S(src2) | A(dst));
return push_inst(compiler, INS_CLEAR_LEFT(dst, src2, 0));
}
else {
SLJIT_ASSERT(dst == src2);
}
return SLJIT_SUCCESS;
case SLJIT_MOV_UB:
case SLJIT_MOV_SB:
SLJIT_ASSERT(src1 == TMP_REG1);
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SB)
return push_inst(compiler, EXTSB | S(src2) | A(dst));
return push_inst(compiler, INS_CLEAR_LEFT(dst, src2, 24));
}
else if ((flags & REG_DEST) && op == SLJIT_MOV_SB)
return push_inst(compiler, EXTSB | S(src2) | A(dst));
else {
SLJIT_ASSERT(dst == src2);
}
return SLJIT_SUCCESS;
case SLJIT_MOV_UH:
case SLJIT_MOV_SH:
SLJIT_ASSERT(src1 == TMP_REG1);
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_SH)
return push_inst(compiler, EXTSH | S(src2) | A(dst));
return push_inst(compiler, INS_CLEAR_LEFT(dst, src2, 16));
}
else {
SLJIT_ASSERT(dst == src2);
}
return SLJIT_SUCCESS;
case SLJIT_NOT:
SLJIT_ASSERT(src1 == TMP_REG1);
UN_EXTS();
return push_inst(compiler, NOR | RC(flags) | S(src2) | A(dst) | B(src2));
case SLJIT_NEG:
SLJIT_ASSERT(src1 == TMP_REG1);
UN_EXTS();
return push_inst(compiler, NEG | OERC(flags) | D(dst) | A(src2));
case SLJIT_CLZ:
SLJIT_ASSERT(src1 == TMP_REG1);
if (flags & ALT_FORM1)
return push_inst(compiler, CNTLZW | RC(flags) | S(src2) | A(dst));
return push_inst(compiler, CNTLZD | RC(flags) | S(src2) | A(dst));
case SLJIT_ADD:
if (flags & ALT_FORM1) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ADDI | D(dst) | A(src1) | compiler->imm);
}
if (flags & ALT_FORM2) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ADDIS | D(dst) | A(src1) | compiler->imm);
}
if (flags & ALT_FORM3) {
SLJIT_ASSERT(src2 == TMP_REG2);
BIN_IMM_EXTS();
return push_inst(compiler, ADDIC | D(dst) | A(src1) | compiler->imm);
}
if (flags & ALT_FORM4) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
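/* ADDI sign-extends its 16-bit immediate; when bit 15 of the low half is
   set, the high half gets an extra +1 to compensate. */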
FAIL_IF(push_inst(compiler, ADDI | D(dst) | A(src1) | (compiler->imm & 0xffff)));
return push_inst(compiler, ADDIS | D(dst) | A(dst) | (((compiler->imm >> 16) & 0xffff) + ((compiler->imm >> 15) & 0x1)));
}
if (!(flags & ALT_SET_FLAGS))
return push_inst(compiler, ADD | D(dst) | A(src1) | B(src2));
BIN_EXTS();
return push_inst(compiler, ADDC | OERC(ALT_SET_FLAGS) | D(dst) | A(src1) | B(src2));
case SLJIT_ADDC:
if (flags & ALT_FORM1) {
FAIL_IF(push_inst(compiler, MFXER | D(0)));
FAIL_IF(push_inst(compiler, ADDE | D(dst) | A(src1) | B(src2)));
return push_inst(compiler, MTXER | S(0));
}
BIN_EXTS();
return push_inst(compiler, ADDE | D(dst) | A(src1) | B(src2));
case SLJIT_SUB:
if (flags & ALT_FORM1) {
/* Flags are not set: BIN_IMM_EXTS unnecessary. */
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, SUBFIC | D(dst) | A(src1) | compiler->imm);
}
if (flags & (ALT_FORM2 | ALT_FORM3)) {
SLJIT_ASSERT(src2 == TMP_REG2);
if (flags & ALT_FORM2)
FAIL_IF(push_inst(compiler, CMPI | CRD(0 | ((flags & ALT_SIGN_EXT) ? 0 : 1)) | A(src1) | compiler->imm));
if (flags & ALT_FORM3)
return push_inst(compiler, CMPLI | CRD(4 | ((flags & ALT_SIGN_EXT) ? 0 : 1)) | A(src1) | compiler->imm);
return SLJIT_SUCCESS;
}
if (flags & (ALT_FORM4 | ALT_FORM5)) {
if (flags & ALT_FORM4)
FAIL_IF(push_inst(compiler, CMPL | CRD(4 | ((flags & ALT_SIGN_EXT) ? 0 : 1)) | A(src1) | B(src2)));
if (flags & ALT_FORM5)
return push_inst(compiler, CMP | CRD(0 | ((flags & ALT_SIGN_EXT) ? 0 : 1)) | A(src1) | B(src2));
return SLJIT_SUCCESS;
}
if (!(flags & ALT_SET_FLAGS))
return push_inst(compiler, SUBF | D(dst) | A(src2) | B(src1));
BIN_EXTS();
if (flags & ALT_FORM6)
FAIL_IF(push_inst(compiler, CMPL | CRD(4 | ((flags & ALT_SIGN_EXT) ? 0 : 1)) | A(src1) | B(src2)));
return push_inst(compiler, SUBFC | OERC(ALT_SET_FLAGS) | D(dst) | A(src2) | B(src1));
case SLJIT_SUBC:
if (flags & ALT_FORM1) {
FAIL_IF(push_inst(compiler, MFXER | D(0)));
FAIL_IF(push_inst(compiler, SUBFE | D(dst) | A(src2) | B(src1)));
return push_inst(compiler, MTXER | S(0));
}
BIN_EXTS();
return push_inst(compiler, SUBFE | D(dst) | A(src2) | B(src1));
case SLJIT_MUL:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, MULLI | D(dst) | A(src1) | compiler->imm);
}
BIN_EXTS();
if (flags & ALT_FORM2)
return push_inst(compiler, MULLW | OERC(flags) | D(dst) | A(src2) | B(src1));
return push_inst(compiler, MULLD | OERC(flags) | D(dst) | A(src2) | B(src1));
case SLJIT_AND:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ANDI | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM2) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ANDIS | S(src1) | A(dst) | compiler->imm);
}
return push_inst(compiler, AND | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_OR:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ORI | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM2) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, ORIS | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM3) {
SLJIT_ASSERT(src2 == TMP_REG2);
FAIL_IF(push_inst(compiler, ORI | S(src1) | A(dst) | IMM(compiler->imm)));
return push_inst(compiler, ORIS | S(dst) | A(dst) | IMM(compiler->imm >> 16));
}
return push_inst(compiler, OR | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_XOR:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, XORI | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM2) {
SLJIT_ASSERT(src2 == TMP_REG2);
return push_inst(compiler, XORIS | S(src1) | A(dst) | compiler->imm);
}
if (flags & ALT_FORM3) {
SLJIT_ASSERT(src2 == TMP_REG2);
FAIL_IF(push_inst(compiler, XORI | S(src1) | A(dst) | IMM(compiler->imm)));
return push_inst(compiler, XORIS | S(dst) | A(dst) | IMM(compiler->imm >> 16));
}
return push_inst(compiler, XOR | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_SHL:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
if (flags & ALT_FORM2) {
compiler->imm &= 0x1f;
return push_inst(compiler, RLWINM | RC(flags) | S(src1) | A(dst) | (compiler->imm << 11) | ((31 - compiler->imm) << 1));
}
else {
compiler->imm &= 0x3f;
return push_inst(compiler, RLDI(dst, src1, compiler->imm, 63 - compiler->imm, 1) | RC(flags));
}
}
return push_inst(compiler, ((flags & ALT_FORM2) ? SLW : SLD) | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_LSHR:
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
if (flags & ALT_FORM2) {
compiler->imm &= 0x1f;
return push_inst(compiler, RLWINM | RC(flags) | S(src1) | A(dst) | (((32 - compiler->imm) & 0x1f) << 11) | (compiler->imm << 6) | (31 << 1));
}
else {
compiler->imm &= 0x3f;
return push_inst(compiler, RLDI(dst, src1, 64 - compiler->imm, compiler->imm, 0) | RC(flags));
}
}
return push_inst(compiler, ((flags & ALT_FORM2) ? SRW : SRD) | RC(flags) | S(src1) | A(dst) | B(src2));
case SLJIT_ASHR:
if (flags & ALT_FORM3)
FAIL_IF(push_inst(compiler, MFXER | D(0)));
if (flags & ALT_FORM1) {
SLJIT_ASSERT(src2 == TMP_REG2);
if (flags & ALT_FORM2) {
compiler->imm &= 0x1f;
FAIL_IF(push_inst(compiler, SRAWI | RC(flags) | S(src1) | A(dst) | (compiler->imm << 11)));
}
else {
compiler->imm &= 0x3f;
FAIL_IF(push_inst(compiler, SRADI | RC(flags) | S(src1) | A(dst) | ((compiler->imm & 0x1f) << 11) | ((compiler->imm & 0x20) >> 4)));
}
}
else
FAIL_IF(push_inst(compiler, ((flags & ALT_FORM2) ? SRAW : SRAD) | RC(flags) | S(src1) | A(dst) | B(src2)));
return (flags & ALT_FORM3) ? push_inst(compiler, MTXER | S(0)) : SLJIT_SUCCESS;
}
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
}
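/* Fixed five-instruction form (ADDIS, ORI, RLDICR, ORIS, ORI):
   sljit_set_jump_addr() below patches slots 0, 1, 3 and 4; slot 2 is the
   rotate and is left untouched. */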
static SLJIT_INLINE sljit_si emit_const(struct sljit_compiler *compiler, sljit_si reg, sljit_sw init_value)
{
FAIL_IF(push_inst(compiler, ADDIS | D(reg) | A(0) | IMM(init_value >> 48)));
FAIL_IF(push_inst(compiler, ORI | S(reg) | A(reg) | IMM(init_value >> 32)));
FAIL_IF(PUSH_RLDICR(reg, 31));
FAIL_IF(push_inst(compiler, ORIS | S(reg) | A(reg) | IMM(init_value >> 16)));
return push_inst(compiler, ORI | S(reg) | A(reg) | IMM(init_value));
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_jump_addr(sljit_uw addr, sljit_uw new_addr)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_addr >> 48) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | ((new_addr >> 32) & 0xffff);
inst[3] = (inst[3] & 0xffff0000) | ((new_addr >> 16) & 0xffff);
inst[4] = (inst[4] & 0xffff0000) | (new_addr & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 5);
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_const(sljit_uw addr, sljit_sw new_constant)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffff0000) | ((new_constant >> 48) & 0xffff);
inst[1] = (inst[1] & 0xffff0000) | ((new_constant >> 32) & 0xffff);
inst[3] = (inst[3] & 0xffff0000) | ((new_constant >> 16) & 0xffff);
inst[4] = (inst[4] & 0xffff0000) | (new_constant & 0xffff);
SLJIT_CACHE_FLUSH(inst, inst + 5);
}

File diff suppressed because it is too large

View File

@ -0,0 +1,164 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
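/* SETHI supplies the upper 22 bits of a 32-bit immediate; a following OR
   fills in the low 10 bits when they are non-zero. */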
static sljit_si load_immediate(struct sljit_compiler *compiler, sljit_si dst, sljit_sw imm)
{
if (imm <= SIMM_MAX && imm >= SIMM_MIN)
return push_inst(compiler, OR | D(dst) | S1(0) | IMM(imm), DR(dst));
FAIL_IF(push_inst(compiler, SETHI | D(dst) | ((imm >> 10) & 0x3fffff), DR(dst)));
return (imm & 0x3ff) ? push_inst(compiler, OR | D(dst) | S1(dst) | IMM_ARG | (imm & 0x3ff), DR(dst)) : SLJIT_SUCCESS;
}
#define ARG2(flags, src2) ((flags & SRC2_IMM) ? IMM(src2) : S2(src2))
static SLJIT_INLINE sljit_si emit_single_op(struct sljit_compiler *compiler, sljit_si op, sljit_si flags,
sljit_si dst, sljit_si src1, sljit_sw src2)
{
SLJIT_COMPILE_ASSERT(ICC_IS_SET == SET_FLAGS, icc_is_set_and_set_flags_must_be_the_same);
switch (op) {
case SLJIT_MOV:
case SLJIT_MOV_UI:
case SLJIT_MOV_SI:
case SLJIT_MOV_P:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if (dst != src2)
return push_inst(compiler, OR | D(dst) | S1(0) | S2(src2), DR(dst));
return SLJIT_SUCCESS;
case SLJIT_MOV_UB:
case SLJIT_MOV_SB:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
if (op == SLJIT_MOV_UB)
return push_inst(compiler, AND | D(dst) | S1(src2) | IMM(0xff), DR(dst));
FAIL_IF(push_inst(compiler, SLL | D(dst) | S1(src2) | IMM(24), DR(dst)));
return push_inst(compiler, SRA | D(dst) | S1(dst) | IMM(24), DR(dst));
}
else if (dst != src2)
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
case SLJIT_MOV_UH:
case SLJIT_MOV_SH:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
if ((flags & (REG_DEST | REG2_SOURCE)) == (REG_DEST | REG2_SOURCE)) {
FAIL_IF(push_inst(compiler, SLL | D(dst) | S1(src2) | IMM(16), DR(dst)));
return push_inst(compiler, (op == SLJIT_MOV_SH ? SRA : SRL) | D(dst) | S1(dst) | IMM(16), DR(dst));
}
else if (dst != src2)
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
case SLJIT_NOT:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
return push_inst(compiler, XNOR | (flags & SET_FLAGS) | D(dst) | S1(0) | S2(src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_CLZ:
SLJIT_ASSERT(src1 == TMP_REG1 && !(flags & SRC2_IMM));
/* sparc 32 does not support SLJIT_KEEP_FLAGS. Not sure I can fix this. */
FAIL_IF(push_inst(compiler, SUB | SET_FLAGS | D(0) | S1(src2) | S2(0), SET_FLAGS));
FAIL_IF(push_inst(compiler, OR | D(TMP_REG1) | S1(0) | S2(src2), DR(TMP_REG1)));
FAIL_IF(push_inst(compiler, BICC | DA(0x1) | (7 & DISP_MASK), UNMOVABLE_INS));
FAIL_IF(push_inst(compiler, OR | (flags & SET_FLAGS) | D(dst) | S1(0) | IMM(32), UNMOVABLE_INS | (flags & SET_FLAGS)));
FAIL_IF(push_inst(compiler, OR | D(dst) | S1(0) | IMM(-1), DR(dst)));
/* Loop. */
FAIL_IF(push_inst(compiler, SUB | SET_FLAGS | D(0) | S1(TMP_REG1) | S2(0), SET_FLAGS));
FAIL_IF(push_inst(compiler, SLL | D(TMP_REG1) | S1(TMP_REG1) | IMM(1), DR(TMP_REG1)));
FAIL_IF(push_inst(compiler, BICC | DA(0xe) | (-2 & DISP_MASK), UNMOVABLE_INS));
return push_inst(compiler, ADD | (flags & SET_FLAGS) | D(dst) | S1(dst) | IMM(1), UNMOVABLE_INS | (flags & SET_FLAGS));
case SLJIT_ADD:
return push_inst(compiler, ADD | (flags & SET_FLAGS) | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_ADDC:
return push_inst(compiler, ADDC | (flags & SET_FLAGS) | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_SUB:
return push_inst(compiler, SUB | (flags & SET_FLAGS) | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_SUBC:
return push_inst(compiler, SUBC | (flags & SET_FLAGS) | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_MUL:
FAIL_IF(push_inst(compiler, SMUL | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst)));
if (!(flags & SET_FLAGS))
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, SRA | D(TMP_REG1) | S1(dst) | IMM(31), DR(TMP_REG1)));
FAIL_IF(push_inst(compiler, RDY | D(TMP_LINK), DR(TMP_LINK)));
return push_inst(compiler, SUB | SET_FLAGS | D(0) | S1(TMP_REG1) | S2(TMP_LINK), MOVABLE_INS | SET_FLAGS);
case SLJIT_AND:
return push_inst(compiler, AND | (flags & SET_FLAGS) | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_OR:
return push_inst(compiler, OR | (flags & SET_FLAGS) | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_XOR:
return push_inst(compiler, XOR | (flags & SET_FLAGS) | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst) | (flags & SET_FLAGS));
case SLJIT_SHL:
FAIL_IF(push_inst(compiler, SLL | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst)));
return !(flags & SET_FLAGS) ? SLJIT_SUCCESS : push_inst(compiler, SUB | SET_FLAGS | D(0) | S1(dst) | S2(0), SET_FLAGS);
case SLJIT_LSHR:
FAIL_IF(push_inst(compiler, SRL | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst)));
return !(flags & SET_FLAGS) ? SLJIT_SUCCESS : push_inst(compiler, SUB | SET_FLAGS | D(0) | S1(dst) | S2(0), SET_FLAGS);
case SLJIT_ASHR:
FAIL_IF(push_inst(compiler, SRA | D(dst) | S1(src1) | ARG2(flags, src2), DR(dst)));
return !(flags & SET_FLAGS) ? SLJIT_SUCCESS : push_inst(compiler, SUB | SET_FLAGS | D(0) | S1(dst) | S2(0), SET_FLAGS);
}
SLJIT_ASSERT_STOP();
return SLJIT_SUCCESS;
}
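/* The SETHI/OR pair is always emitted in full so that both immediate
   fields can be patched by the functions below. */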
static SLJIT_INLINE sljit_si emit_const(struct sljit_compiler *compiler, sljit_si dst, sljit_sw init_value)
{
FAIL_IF(push_inst(compiler, SETHI | D(dst) | ((init_value >> 10) & 0x3fffff), DR(dst)));
return push_inst(compiler, OR | D(dst) | S1(dst) | IMM_ARG | (init_value & 0x3ff), DR(dst));
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_jump_addr(sljit_uw addr, sljit_uw new_addr)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffc00000) | ((new_addr >> 10) & 0x3fffff);
inst[1] = (inst[1] & 0xfffffc00) | (new_addr & 0x3ff);
SLJIT_CACHE_FLUSH(inst, inst + 2);
}
SLJIT_API_FUNC_ATTRIBUTE void sljit_set_const(sljit_uw addr, sljit_sw new_constant)
{
sljit_ins *inst = (sljit_ins*)addr;
inst[0] = (inst[0] & 0xffc00000) | ((new_constant >> 10) & 0x3fffff);
inst[1] = (inst[1] & 0xfffffc00) | (new_constant & 0x3ff);
SLJIT_CACHE_FLUSH(inst, inst + 2);
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,550 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* x86 32-bit arch dependent functions. */
static sljit_si emit_do_imm(struct sljit_compiler *compiler, sljit_ub opcode, sljit_sw imm)
{
sljit_ub *inst;
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1 + sizeof(sljit_sw));
FAIL_IF(!inst);
INC_SIZE(1 + sizeof(sljit_sw));
*inst++ = opcode;
*(sljit_sw*)inst = imm;
return SLJIT_SUCCESS;
}
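/* Emits a one- or two-byte jump/call opcode followed by a 32-bit operand.
   For label targets the displacement is filled in later (PATCH_MW); for
   absolute targets it is computed now, relative to the end of the
   instruction. */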
static sljit_ub* generate_far_jump_code(struct sljit_jump *jump, sljit_ub *code_ptr, sljit_si type)
{
if (type == SLJIT_JUMP) {
*code_ptr++ = JMP_i32;
jump->addr++;
}
else if (type >= SLJIT_FAST_CALL) {
*code_ptr++ = CALL_i32;
jump->addr++;
}
else {
*code_ptr++ = GROUP_0F;
*code_ptr++ = get_jump_code(type);
jump->addr += 2;
}
if (jump->flags & JUMP_LABEL)
jump->flags |= PATCH_MW;
else
*(sljit_sw*)code_ptr = jump->u.target - (jump->addr + 4);
code_ptr += 4;
return code_ptr;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_enter(struct sljit_compiler *compiler,
sljit_si options, sljit_si args, sljit_si scratches, sljit_si saveds,
sljit_si fscratches, sljit_si fsaveds, sljit_si local_size)
{
sljit_si size;
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size);
compiler->args = args;
compiler->flags_saved = 0;
size = 1 + (scratches > 7 ? (scratches - 7) : 0) + (saveds <= 3 ? saveds : 3);
#if (defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
size += (args > 0 ? (args * 2) : 0) + (args > 2 ? 2 : 0);
#else
size += (args > 0 ? (2 + args * 3) : 0);
#endif
inst = (sljit_ub*)ensure_buf(compiler, 1 + size);
FAIL_IF(!inst);
INC_SIZE(size);
PUSH_REG(reg_map[TMP_REG1]);
#if !(defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
if (args > 0) {
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[TMP_REG1] << 3) | 0x4 /* esp */;
}
#endif
if (saveds > 2 || scratches > 7)
PUSH_REG(reg_map[SLJIT_S2]);
if (saveds > 1 || scratches > 8)
PUSH_REG(reg_map[SLJIT_S1]);
if (saveds > 0 || scratches > 9)
PUSH_REG(reg_map[SLJIT_S0]);
#if (defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
if (args > 0) {
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[SLJIT_S0] << 3) | reg_map[SLJIT_R2];
}
if (args > 1) {
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[SLJIT_S1] << 3) | reg_map[SLJIT_R1];
}
if (args > 2) {
*inst++ = MOV_r_rm;
*inst++ = MOD_DISP8 | (reg_map[SLJIT_S2] << 3) | 0x4 /* esp */;
*inst++ = 0x24;
*inst++ = sizeof(sljit_sw) * (3 + 2); /* saveds >= 3 as well. */
}
#else
if (args > 0) {
*inst++ = MOV_r_rm;
*inst++ = MOD_DISP8 | (reg_map[SLJIT_S0] << 3) | reg_map[TMP_REG1];
*inst++ = sizeof(sljit_sw) * 2;
}
if (args > 1) {
*inst++ = MOV_r_rm;
*inst++ = MOD_DISP8 | (reg_map[SLJIT_S1] << 3) | reg_map[TMP_REG1];
*inst++ = sizeof(sljit_sw) * 3;
}
if (args > 2) {
*inst++ = MOV_r_rm;
*inst++ = MOD_DISP8 | (reg_map[SLJIT_S2] << 3) | reg_map[TMP_REG1];
*inst++ = sizeof(sljit_sw) * 4;
}
#endif
SLJIT_COMPILE_ASSERT(SLJIT_LOCALS_OFFSET >= (2 + 4) * sizeof(sljit_uw), require_at_least_two_words);
#if defined(__APPLE__)
/* Ignore pushed registers and SLJIT_LOCALS_OFFSET when computing the aligned local size. */
saveds = (2 + (scratches > 7 ? (scratches - 7) : 0) + (saveds <= 3 ? saveds : 3)) * sizeof(sljit_uw);
local_size = ((SLJIT_LOCALS_OFFSET + saveds + local_size + 15) & ~15) - saveds;
#else
if (options & SLJIT_DOUBLE_ALIGNMENT) {
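/* Run-time alignment: the current SP is copied to TMP_REG1 and pushed
   last, padding with one extra word when needed so that after the PUSH
   the stack is 8-byte aligned; sljit_emit_return() restores it with a
   "mov esp, [esp]". */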
local_size = SLJIT_LOCALS_OFFSET + ((local_size + 7) & ~7);
inst = (sljit_ub*)ensure_buf(compiler, 1 + 17);
FAIL_IF(!inst);
INC_SIZE(17);
inst[0] = MOV_r_rm;
inst[1] = MOD_REG | (reg_map[TMP_REG1] << 3) | reg_map[SLJIT_SP];
inst[2] = GROUP_F7;
inst[3] = MOD_REG | (0 << 3) | reg_map[SLJIT_SP];
*(sljit_sw*)(inst + 4) = 0x4;
inst[8] = JNE_i8;
inst[9] = 6;
inst[10] = GROUP_BINARY_81;
inst[11] = MOD_REG | (5 << 3) | reg_map[SLJIT_SP];
*(sljit_sw*)(inst + 12) = 0x4;
inst[16] = PUSH_r + reg_map[TMP_REG1];
}
else
local_size = SLJIT_LOCALS_OFFSET + ((local_size + 3) & ~3);
#endif
compiler->local_size = local_size;
#ifdef _WIN32
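/* Windows commits stack memory one guard page at a time, so frames larger
   than 1 KB are pre-touched through sljit_grow_stack() before SP is
   adjusted below. */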
if (local_size > 1024) {
#if (defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
FAIL_IF(emit_do_imm(compiler, MOV_r_i32 + reg_map[SLJIT_R0], local_size));
#else
local_size -= SLJIT_LOCALS_OFFSET;
FAIL_IF(emit_do_imm(compiler, MOV_r_i32 + reg_map[SLJIT_R0], local_size));
FAIL_IF(emit_non_cum_binary(compiler, SUB_r_rm, SUB_rm_r, SUB, SUB_EAX_i32,
SLJIT_SP, 0, SLJIT_SP, 0, SLJIT_IMM, SLJIT_LOCALS_OFFSET));
#endif
FAIL_IF(sljit_emit_ijump(compiler, SLJIT_CALL1, SLJIT_IMM, SLJIT_FUNC_OFFSET(sljit_grow_stack)));
}
#endif
SLJIT_ASSERT(local_size > 0);
return emit_non_cum_binary(compiler, SUB_r_rm, SUB_rm_r, SUB, SUB_EAX_i32,
SLJIT_SP, 0, SLJIT_SP, 0, SLJIT_IMM, local_size);
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_set_context(struct sljit_compiler *compiler,
sljit_si options, sljit_si args, sljit_si scratches, sljit_si saveds,
sljit_si fscratches, sljit_si fsaveds, sljit_si local_size)
{
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size);
compiler->args = args;
#if defined(__APPLE__)
saveds = (2 + (scratches > 7 ? (scratches - 7) : 0) + (saveds <= 3 ? saveds : 3)) * sizeof(sljit_uw);
compiler->local_size = ((SLJIT_LOCALS_OFFSET + saveds + local_size + 15) & ~15) - saveds;
#else
if (options & SLJIT_DOUBLE_ALIGNMENT)
compiler->local_size = SLJIT_LOCALS_OFFSET + ((local_size + 7) & ~7);
else
compiler->local_size = SLJIT_LOCALS_OFFSET + ((local_size + 3) & ~3);
#endif
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_return(struct sljit_compiler *compiler, sljit_si op, sljit_si src, sljit_sw srcw)
{
sljit_si size;
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_return(compiler, op, src, srcw));
SLJIT_ASSERT(compiler->args >= 0);
compiler->flags_saved = 0;
FAIL_IF(emit_mov_before_return(compiler, op, src, srcw));
SLJIT_ASSERT(compiler->local_size > 0);
FAIL_IF(emit_cum_binary(compiler, ADD_r_rm, ADD_rm_r, ADD, ADD_EAX_i32,
SLJIT_SP, 0, SLJIT_SP, 0, SLJIT_IMM, compiler->local_size));
#if !defined(__APPLE__)
if (compiler->options & SLJIT_DOUBLE_ALIGNMENT) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 3);
FAIL_IF(!inst);
INC_SIZE(3);
inst[0] = MOV_r_rm;
inst[1] = (reg_map[SLJIT_SP] << 3) | 0x4 /* SIB */;
inst[2] = (4 << 3) | reg_map[SLJIT_SP];
}
#endif
size = 2 + (compiler->scratches > 7 ? (compiler->scratches - 7) : 0) +
(compiler->saveds <= 3 ? compiler->saveds : 3);
#if (defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
if (compiler->args > 2)
size += 2;
#else
if (compiler->args > 0)
size += 2;
#endif
inst = (sljit_ub*)ensure_buf(compiler, 1 + size);
FAIL_IF(!inst);
INC_SIZE(size);
if (compiler->saveds > 0 || compiler->scratches > 9)
POP_REG(reg_map[SLJIT_S0]);
if (compiler->saveds > 1 || compiler->scratches > 8)
POP_REG(reg_map[SLJIT_S1]);
if (compiler->saveds > 2 || compiler->scratches > 7)
POP_REG(reg_map[SLJIT_S2]);
POP_REG(reg_map[TMP_REG1]);
#if (defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
if (compiler->args > 2)
RET_I16(sizeof(sljit_sw));
else
RET();
#else
RET();
#endif
return SLJIT_SUCCESS;
}
/* --------------------------------------------------------------------- */
/* Operators */
/* --------------------------------------------------------------------- */
/* Size contains the flags as well. */
static sljit_ub* emit_x86_instruction(struct sljit_compiler *compiler, sljit_si size,
/* The register or immediate operand. */
sljit_si a, sljit_sw imma,
/* The general operand (not immediate). */
sljit_si b, sljit_sw immb)
{
sljit_ub *inst;
sljit_ub *buf_ptr;
sljit_si flags = size & ~0xf;
sljit_si inst_size;
/* Both cannot be switched on. */
SLJIT_ASSERT((flags & (EX86_BIN_INS | EX86_SHIFT_INS)) != (EX86_BIN_INS | EX86_SHIFT_INS));
/* Size flags not allowed for typed instructions. */
SLJIT_ASSERT(!(flags & (EX86_BIN_INS | EX86_SHIFT_INS)) || (flags & (EX86_BYTE_ARG | EX86_HALF_ARG)) == 0);
/* Both size flags cannot be switched on. */
SLJIT_ASSERT((flags & (EX86_BYTE_ARG | EX86_HALF_ARG)) != (EX86_BYTE_ARG | EX86_HALF_ARG));
/* SSE2 and immediate is not possible. */
SLJIT_ASSERT(!(a & SLJIT_IMM) || !(flags & EX86_SSE2));
SLJIT_ASSERT((flags & (EX86_PREF_F2 | EX86_PREF_F3)) != (EX86_PREF_F2 | EX86_PREF_F3)
&& (flags & (EX86_PREF_F2 | EX86_PREF_66)) != (EX86_PREF_F2 | EX86_PREF_66)
&& (flags & (EX86_PREF_F3 | EX86_PREF_66)) != (EX86_PREF_F3 | EX86_PREF_66));
size &= 0xf;
inst_size = size;
if (flags & (EX86_PREF_F2 | EX86_PREF_F3))
inst_size++;
if (flags & EX86_PREF_66)
inst_size++;
/* Calculate size of b. */
inst_size += 1; /* mod r/m byte. */
if (b & SLJIT_MEM) {
if ((b & REG_MASK) == SLJIT_UNUSED)
inst_size += sizeof(sljit_sw);
else if (immb != 0 && !(b & OFFS_REG_MASK)) {
/* Displacement, encoded as 8 or 32 bits. */
if (immb <= 127 && immb >= -128)
inst_size += sizeof(sljit_sb);
else
inst_size += sizeof(sljit_sw);
}
if ((b & REG_MASK) == SLJIT_SP && !(b & OFFS_REG_MASK))
b |= TO_OFFS_REG(SLJIT_SP);
if ((b & OFFS_REG_MASK) != SLJIT_UNUSED)
inst_size += 1; /* SIB byte. */
}
/* Calculate size of a. */
if (a & SLJIT_IMM) {
if (flags & EX86_BIN_INS) {
if (imma <= 127 && imma >= -128) {
inst_size += 1;
flags |= EX86_BYTE_ARG;
} else
inst_size += 4;
}
else if (flags & EX86_SHIFT_INS) {
imma &= 0x1f;
if (imma != 1) {
inst_size ++;
flags |= EX86_BYTE_ARG;
}
} else if (flags & EX86_BYTE_ARG)
inst_size++;
else if (flags & EX86_HALF_ARG)
inst_size += sizeof(short);
else
inst_size += sizeof(sljit_sw);
}
else
SLJIT_ASSERT(!(flags & EX86_SHIFT_INS) || a == SLJIT_PREF_SHIFT_REG);
inst = (sljit_ub*)ensure_buf(compiler, 1 + inst_size);
PTR_FAIL_IF(!inst);
/* Encoding the byte. */
INC_SIZE(inst_size);
if (flags & EX86_PREF_F2)
*inst++ = 0xf2;
if (flags & EX86_PREF_F3)
*inst++ = 0xf3;
if (flags & EX86_PREF_66)
*inst++ = 0x66;
buf_ptr = inst + size;
/* Encode mod/rm byte. */
if (!(flags & EX86_SHIFT_INS)) {
if ((flags & EX86_BIN_INS) && (a & SLJIT_IMM))
*inst = (flags & EX86_BYTE_ARG) ? GROUP_BINARY_83 : GROUP_BINARY_81;
if ((a & SLJIT_IMM) || (a == 0))
*buf_ptr = 0;
else if (!(flags & EX86_SSE2_OP1))
*buf_ptr = reg_map[a] << 3;
else
*buf_ptr = a << 3;
}
else {
if (a & SLJIT_IMM) {
if (imma == 1)
*inst = GROUP_SHIFT_1;
else
*inst = GROUP_SHIFT_N;
} else
*inst = GROUP_SHIFT_CL;
*buf_ptr = 0;
}
if (!(b & SLJIT_MEM))
*buf_ptr++ |= MOD_REG + ((!(flags & EX86_SSE2_OP2)) ? reg_map[b] : b);
else if ((b & REG_MASK) != SLJIT_UNUSED) {
if ((b & OFFS_REG_MASK) == SLJIT_UNUSED || (b & OFFS_REG_MASK) == TO_OFFS_REG(SLJIT_SP)) {
if (immb != 0) {
if (immb <= 127 && immb >= -128)
*buf_ptr |= 0x40;
else
*buf_ptr |= 0x80;
}
if ((b & OFFS_REG_MASK) == SLJIT_UNUSED)
*buf_ptr++ |= reg_map[b & REG_MASK];
else {
*buf_ptr++ |= 0x04;
*buf_ptr++ = reg_map[b & REG_MASK] | (reg_map[OFFS_REG(b)] << 3);
}
if (immb != 0) {
if (immb <= 127 && immb >= -128)
*buf_ptr++ = immb; /* 8 bit displacement. */
else {
*(sljit_sw*)buf_ptr = immb; /* 32 bit displacement. */
buf_ptr += sizeof(sljit_sw);
}
}
}
else {
*buf_ptr++ |= 0x04;
*buf_ptr++ = reg_map[b & REG_MASK] | (reg_map[OFFS_REG(b)] << 3) | (immb << 6);
}
}
else {
*buf_ptr++ |= 0x05;
*(sljit_sw*)buf_ptr = immb; /* 32 bit displacement. */
buf_ptr += sizeof(sljit_sw);
}
if (a & SLJIT_IMM) {
if (flags & EX86_BYTE_ARG)
*buf_ptr = imma;
else if (flags & EX86_HALF_ARG)
*(short*)buf_ptr = imma;
else if (!(flags & EX86_SHIFT_INS))
*(sljit_sw*)buf_ptr = imma;
}
return !(flags & EX86_SHIFT_INS) ? inst : (inst + 1);
}
/* --------------------------------------------------------------------- */
/* Call / return instructions */
/* --------------------------------------------------------------------- */
static SLJIT_INLINE sljit_si call_with_args(struct sljit_compiler *compiler, sljit_si type)
{
sljit_ub *inst;
#if (defined SLJIT_X86_32_FASTCALL && SLJIT_X86_32_FASTCALL)
inst = (sljit_ub*)ensure_buf(compiler, type >= SLJIT_CALL3 ? 1 + 2 + 1 : 1 + 2);
FAIL_IF(!inst);
INC_SIZE(type >= SLJIT_CALL3 ? 2 + 1 : 2);
if (type >= SLJIT_CALL3)
PUSH_REG(reg_map[SLJIT_R2]);
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[SLJIT_R2] << 3) | reg_map[SLJIT_R0];
#else
inst = (sljit_ub*)ensure_buf(compiler, 1 + 4 * (type - SLJIT_CALL0));
FAIL_IF(!inst);
INC_SIZE(4 * (type - SLJIT_CALL0));
*inst++ = MOV_rm_r;
*inst++ = MOD_DISP8 | (reg_map[SLJIT_R0] << 3) | 0x4 /* SIB */;
*inst++ = (0x4 /* none */ << 3) | reg_map[SLJIT_SP];
*inst++ = 0;
if (type >= SLJIT_CALL2) {
*inst++ = MOV_rm_r;
*inst++ = MOD_DISP8 | (reg_map[SLJIT_R1] << 3) | 0x4 /* SIB */;
*inst++ = (0x4 /* none */ << 3) | reg_map[SLJIT_SP];
*inst++ = sizeof(sljit_sw);
}
if (type >= SLJIT_CALL3) {
*inst++ = MOV_rm_r;
*inst++ = MOD_DISP8 | (reg_map[SLJIT_R2] << 3) | 0x4 /* SIB */;
*inst++ = (0x4 /* none */ << 3) | reg_map[SLJIT_SP];
*inst++ = 2 * sizeof(sljit_sw);
}
#endif
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_fast_enter(struct sljit_compiler *compiler, sljit_si dst, sljit_sw dstw)
{
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_fast_enter(compiler, dst, dstw));
ADJUST_LOCAL_OFFSET(dst, dstw);
CHECK_EXTRA_REGS(dst, dstw, (void)0);
/* For UNUSED dst. Uncommon, but possible. */
if (dst == SLJIT_UNUSED)
dst = TMP_REG1;
if (FAST_IS_REG(dst)) {
/* Unused dest is possible here. */
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1);
FAIL_IF(!inst);
INC_SIZE(1);
POP_REG(reg_map[dst]);
return SLJIT_SUCCESS;
}
/* Memory. */
inst = emit_x86_instruction(compiler, 1, 0, 0, dst, dstw);
FAIL_IF(!inst);
*inst++ = POP_rm;
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_fast_return(struct sljit_compiler *compiler, sljit_si src, sljit_sw srcw)
{
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_fast_return(compiler, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
CHECK_EXTRA_REGS(src, srcw, (void)0);
if (FAST_IS_REG(src)) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1 + 1);
FAIL_IF(!inst);
INC_SIZE(1 + 1);
PUSH_REG(reg_map[src]);
}
else if (src & SLJIT_MEM) {
inst = emit_x86_instruction(compiler, 1, 0, 0, src, srcw);
FAIL_IF(!inst);
*inst++ = GROUP_FF;
*inst |= PUSH_rm;
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1);
FAIL_IF(!inst);
INC_SIZE(1);
}
else {
/* SLJIT_IMM. */
inst = (sljit_ub*)ensure_buf(compiler, 1 + 5 + 1);
FAIL_IF(!inst);
INC_SIZE(5 + 1);
*inst++ = PUSH_i32;
*(sljit_sw*)inst = srcw;
inst += sizeof(sljit_sw);
}
RET();
return SLJIT_SUCCESS;
}

View File

@ -0,0 +1,747 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* x86 64-bit arch dependent functions. */
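/* emit_load_imm64 emits the 10-byte "REX.W + B8+rd" form, i.e.
   movabs reg, imm64. */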
static sljit_si emit_load_imm64(struct sljit_compiler *compiler, sljit_si reg, sljit_sw imm)
{
sljit_ub *inst;
inst = (sljit_ub*)ensure_buf(compiler, 1 + 2 + sizeof(sljit_sw));
FAIL_IF(!inst);
INC_SIZE(2 + sizeof(sljit_sw));
*inst++ = REX_W | ((reg_map[reg] <= 7) ? 0 : REX_B);
*inst++ = MOV_r_i32 + (reg_map[reg] & 0x7);
*(sljit_sw*)inst = imm;
return SLJIT_SUCCESS;
}
static sljit_ub* generate_far_jump_code(struct sljit_jump *jump, sljit_ub *code_ptr, sljit_si type)
{
if (type < SLJIT_JUMP) {
/* Invert type. */
*code_ptr++ = get_jump_code(type ^ 0x1) - 0x10;
*code_ptr++ = 10 + 3;
}
SLJIT_COMPILE_ASSERT(reg_map[TMP_REG3] == 9, tmp3_is_9_first);
*code_ptr++ = REX_W | REX_B;
*code_ptr++ = MOV_r_i32 + 1;
jump->addr = (sljit_uw)code_ptr;
if (jump->flags & JUMP_LABEL)
jump->flags |= PATCH_MD;
else
*(sljit_sw*)code_ptr = jump->u.target;
code_ptr += sizeof(sljit_sw);
*code_ptr++ = REX_B;
*code_ptr++ = GROUP_FF;
*code_ptr++ = (type >= SLJIT_FAST_CALL) ? (MOD_REG | CALL_rm | 1) : (MOD_REG | JMP_rm | 1);
return code_ptr;
}
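/* Uses a rel32 jump/call when the target is within +/-2GB of the code;
   otherwise the address is materialized in TMP_REG3 (r9, per the assert
   below) and an indirect jump/call through it is emitted. */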
static sljit_ub* generate_fixed_jump(sljit_ub *code_ptr, sljit_sw addr, sljit_si type)
{
sljit_sw delta = addr - ((sljit_sw)code_ptr + 1 + sizeof(sljit_si));
if (delta <= HALFWORD_MAX && delta >= HALFWORD_MIN) {
*code_ptr++ = (type == 2) ? CALL_i32 : JMP_i32;
*(sljit_sw*)code_ptr = delta;
}
else {
SLJIT_COMPILE_ASSERT(reg_map[TMP_REG3] == 9, tmp3_is_9_second);
*code_ptr++ = REX_W | REX_B;
*code_ptr++ = MOV_r_i32 + 1;
*(sljit_sw*)code_ptr = addr;
code_ptr += sizeof(sljit_sw);
*code_ptr++ = REX_B;
*code_ptr++ = GROUP_FF;
*code_ptr++ = (type == 2) ? (MOD_REG | CALL_rm | 1) : (MOD_REG | JMP_rm | 1);
}
return code_ptr;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_enter(struct sljit_compiler *compiler,
sljit_si options, sljit_si args, sljit_si scratches, sljit_si saveds,
sljit_si fscratches, sljit_si fsaveds, sljit_si local_size)
{
sljit_si i, tmp, size, saved_register_size;
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size);
compiler->flags_saved = 0;
/* Including the return address saved by the call instruction. */
saved_register_size = GET_SAVED_REGISTERS_SIZE(scratches, saveds, 1);
tmp = saveds < SLJIT_NUMBER_OF_SAVED_REGISTERS ? (SLJIT_S0 + 1 - saveds) : SLJIT_FIRST_SAVED_REG;
for (i = SLJIT_S0; i >= tmp; i--) {
size = reg_map[i] >= 8 ? 2 : 1;
inst = (sljit_ub*)ensure_buf(compiler, 1 + size);
FAIL_IF(!inst);
INC_SIZE(size);
if (reg_map[i] >= 8)
*inst++ = REX_B;
PUSH_REG(reg_lmap[i]);
}
for (i = scratches; i >= SLJIT_FIRST_SAVED_REG; i--) {
size = reg_map[i] >= 8 ? 2 : 1;
inst = (sljit_ub*)ensure_buf(compiler, 1 + size);
FAIL_IF(!inst);
INC_SIZE(size);
if (reg_map[i] >= 8)
*inst++ = REX_B;
PUSH_REG(reg_lmap[i]);
}
if (args > 0) {
size = args * 3;
inst = (sljit_ub*)ensure_buf(compiler, 1 + size);
FAIL_IF(!inst);
INC_SIZE(size);
#ifndef _WIN64
if (args > 0) {
*inst++ = REX_W;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[SLJIT_S0] << 3) | 0x7 /* rdi */;
}
if (args > 1) {
*inst++ = REX_W | REX_R;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_lmap[SLJIT_S1] << 3) | 0x6 /* rsi */;
}
if (args > 2) {
*inst++ = REX_W | REX_R;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_lmap[SLJIT_S2] << 3) | 0x2 /* rdx */;
}
#else
if (args > 0) {
*inst++ = REX_W;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[SLJIT_S0] << 3) | 0x1 /* rcx */;
}
if (args > 1) {
*inst++ = REX_W;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[SLJIT_S1] << 3) | 0x2 /* rdx */;
}
if (args > 2) {
*inst++ = REX_W | REX_B;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (reg_map[SLJIT_S2] << 3) | 0x0 /* r8 */;
}
#endif
}
local_size = ((local_size + SLJIT_LOCALS_OFFSET + saved_register_size + 15) & ~15) - saved_register_size;
compiler->local_size = local_size;
#ifdef _WIN64
if (local_size > 1024) {
/* Allocate stack for the callback, which grows the stack. */
inst = (sljit_ub*)ensure_buf(compiler, 1 + 4 + (3 + sizeof(sljit_si)));
FAIL_IF(!inst);
INC_SIZE(4 + (3 + sizeof(sljit_si)));
*inst++ = REX_W;
*inst++ = GROUP_BINARY_83;
*inst++ = MOD_REG | SUB | 4;
/* Allocated size for registers must be divisible by 8. */
SLJIT_ASSERT(!(saved_register_size & 0x7));
/* Aligned to 16 bytes. */
if (saved_register_size & 0x8) {
*inst++ = 5 * sizeof(sljit_sw);
local_size -= 5 * sizeof(sljit_sw);
} else {
*inst++ = 4 * sizeof(sljit_sw);
local_size -= 4 * sizeof(sljit_sw);
}
/* Second instruction */
SLJIT_COMPILE_ASSERT(reg_map[SLJIT_R0] < 8, temporary_reg1_is_loreg);
*inst++ = REX_W;
*inst++ = MOV_rm_i32;
*inst++ = MOD_REG | reg_lmap[SLJIT_R0];
*(sljit_si*)inst = local_size;
#if (defined SLJIT_VERBOSE && SLJIT_VERBOSE) \
|| (defined SLJIT_ARGUMENT_CHECKS && SLJIT_ARGUMENT_CHECKS)
compiler->skip_checks = 1;
#endif
FAIL_IF(sljit_emit_ijump(compiler, SLJIT_CALL1, SLJIT_IMM, SLJIT_FUNC_OFFSET(sljit_grow_stack)));
}
#endif
SLJIT_ASSERT(local_size > 0);
if (local_size <= 127) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 4);
FAIL_IF(!inst);
INC_SIZE(4);
*inst++ = REX_W;
*inst++ = GROUP_BINARY_83;
*inst++ = MOD_REG | SUB | 4;
*inst++ = local_size;
}
else {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 7);
FAIL_IF(!inst);
INC_SIZE(7);
*inst++ = REX_W;
*inst++ = GROUP_BINARY_81;
*inst++ = MOD_REG | SUB | 4;
*(sljit_si*)inst = local_size;
inst += sizeof(sljit_si);
}
#ifdef _WIN64
/* Save xmm6 register: movaps [rsp + 0x20], xmm6 */
if (fscratches >= 6 || fsaveds >= 1) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 5);
FAIL_IF(!inst);
INC_SIZE(5);
*inst++ = GROUP_0F;
*(sljit_si*)inst = 0x20247429;
}
#endif
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_set_context(struct sljit_compiler *compiler,
sljit_si options, sljit_si args, sljit_si scratches, sljit_si saveds,
sljit_si fscratches, sljit_si fsaveds, sljit_si local_size)
{
sljit_si saved_register_size;
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, args, scratches, saveds, fscratches, fsaveds, local_size);
/* Including the return address saved by the call instruction. */
saved_register_size = GET_SAVED_REGISTERS_SIZE(scratches, saveds, 1);
compiler->local_size = ((local_size + SLJIT_LOCALS_OFFSET + saved_register_size + 15) & ~15) - saved_register_size;
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_return(struct sljit_compiler *compiler, sljit_si op, sljit_si src, sljit_sw srcw)
{
sljit_si i, tmp, size;
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_return(compiler, op, src, srcw));
compiler->flags_saved = 0;
FAIL_IF(emit_mov_before_return(compiler, op, src, srcw));
#ifdef _WIN64
/* Restore xmm6 register: movaps xmm6, [rsp + 0x20] */
if (compiler->fscratches >= 6 || compiler->fsaveds >= 1) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 5);
FAIL_IF(!inst);
INC_SIZE(5);
*inst++ = GROUP_0F;
*(sljit_si*)inst = 0x20247428;
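/* Little-endian bytes 28 74 24 20: opcode 0x28 (MOVAPS xmm, xmm/m128) with
   the same rsp+0x20 addressing as the save in sljit_emit_enter. */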
}
#endif
SLJIT_ASSERT(compiler->local_size > 0);
if (compiler->local_size <= 127) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 4);
FAIL_IF(!inst);
INC_SIZE(4);
*inst++ = REX_W;
*inst++ = GROUP_BINARY_83;
*inst++ = MOD_REG | ADD | 4;
*inst = compiler->local_size;
}
else {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 7);
FAIL_IF(!inst);
INC_SIZE(7);
*inst++ = REX_W;
*inst++ = GROUP_BINARY_81;
*inst++ = MOD_REG | ADD | 4;
*(sljit_si*)inst = compiler->local_size;
}
tmp = compiler->scratches;
for (i = SLJIT_FIRST_SAVED_REG; i <= tmp; i++) {
size = reg_map[i] >= 8 ? 2 : 1;
inst = (sljit_ub*)ensure_buf(compiler, 1 + size);
FAIL_IF(!inst);
INC_SIZE(size);
if (reg_map[i] >= 8)
*inst++ = REX_B;
POP_REG(reg_lmap[i]);
}
tmp = compiler->saveds < SLJIT_NUMBER_OF_SAVED_REGISTERS ? (SLJIT_S0 + 1 - compiler->saveds) : SLJIT_FIRST_SAVED_REG;
for (i = tmp; i <= SLJIT_S0; i++) {
size = reg_map[i] >= 8 ? 2 : 1;
inst = (sljit_ub*)ensure_buf(compiler, 1 + size);
FAIL_IF(!inst);
INC_SIZE(size);
if (reg_map[i] >= 8)
*inst++ = REX_B;
POP_REG(reg_lmap[i]);
}
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1);
FAIL_IF(!inst);
INC_SIZE(1);
RET();
return SLJIT_SUCCESS;
}
/* --------------------------------------------------------------------- */
/* Operators */
/* --------------------------------------------------------------------- */
static sljit_si emit_do_imm32(struct sljit_compiler *compiler, sljit_ub rex, sljit_ub opcode, sljit_sw imm)
{
sljit_ub *inst;
sljit_si length = 1 + (rex ? 1 : 0) + sizeof(sljit_si);
inst = (sljit_ub*)ensure_buf(compiler, 1 + length);
FAIL_IF(!inst);
INC_SIZE(length);
if (rex)
*inst++ = rex;
*inst++ = opcode;
*(sljit_si*)inst = imm;
return SLJIT_SUCCESS;
}
static sljit_ub* emit_x86_instruction(struct sljit_compiler *compiler, sljit_si size,
/* The register or immediate operand. */
sljit_si a, sljit_sw imma,
/* The general operand (not immediate). */
sljit_si b, sljit_sw immb)
{
sljit_ub *inst;
sljit_ub *buf_ptr;
sljit_ub rex = 0;
sljit_si flags = size & ~0xf;
sljit_si inst_size;
/* The immediate operand must fit in 32 bits. */
SLJIT_ASSERT(!(a & SLJIT_IMM) || compiler->mode32 || IS_HALFWORD(imma));
/* Both cannot be switched on. */
SLJIT_ASSERT((flags & (EX86_BIN_INS | EX86_SHIFT_INS)) != (EX86_BIN_INS | EX86_SHIFT_INS));
/* Size flags not allowed for typed instructions. */
SLJIT_ASSERT(!(flags & (EX86_BIN_INS | EX86_SHIFT_INS)) || (flags & (EX86_BYTE_ARG | EX86_HALF_ARG)) == 0);
/* Both size flags cannot be switched on. */
SLJIT_ASSERT((flags & (EX86_BYTE_ARG | EX86_HALF_ARG)) != (EX86_BYTE_ARG | EX86_HALF_ARG));
/* SSE2 and immediate is not possible. */
SLJIT_ASSERT(!(a & SLJIT_IMM) || !(flags & EX86_SSE2));
SLJIT_ASSERT((flags & (EX86_PREF_F2 | EX86_PREF_F3)) != (EX86_PREF_F2 | EX86_PREF_F3)
&& (flags & (EX86_PREF_F2 | EX86_PREF_66)) != (EX86_PREF_F2 | EX86_PREF_66)
&& (flags & (EX86_PREF_F3 | EX86_PREF_66)) != (EX86_PREF_F3 | EX86_PREF_66));
size &= 0xf;
inst_size = size;
if (!compiler->mode32 && !(flags & EX86_NO_REXW))
rex |= REX_W;
else if (flags & EX86_REX)
rex |= REX;
if (flags & (EX86_PREF_F2 | EX86_PREF_F3))
inst_size++;
if (flags & EX86_PREF_66)
inst_size++;
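/* From here on inst_size follows the x86-64 instruction layout:
   [prefixes] [REX] [opcode] [mod r/m] [SIB] [disp8/32] [imm8/16/32]. */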
/* Calculate size of b. */
inst_size += 1; /* mod r/m byte. */
if (b & SLJIT_MEM) {
if (!(b & OFFS_REG_MASK)) {
if (NOT_HALFWORD(immb)) {
if (emit_load_imm64(compiler, TMP_REG3, immb))
return NULL;
immb = 0;
if (b & REG_MASK)
b |= TO_OFFS_REG(TMP_REG3);
else
b |= TMP_REG3;
}
else if (reg_lmap[b & REG_MASK] == 4)
b |= TO_OFFS_REG(SLJIT_SP);
}
if ((b & REG_MASK) == SLJIT_UNUSED)
inst_size += 1 + sizeof(sljit_si); /* SIB byte required to avoid RIP-based addressing. */
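/* With mod == 00, r/m == 101 selects RIP-relative addressing in 64-bit mode,
   so an absolute address must be encoded as SIB (0x25) plus a 32-bit
   displacement; see the end of this function. */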
else {
if (reg_map[b & REG_MASK] >= 8)
rex |= REX_B;
if (immb != 0 && (!(b & OFFS_REG_MASK) || (b & OFFS_REG_MASK) == TO_OFFS_REG(SLJIT_SP))) {
/* 8 or 32 bit displacement. */
if (immb <= 127 && immb >= -128)
inst_size += sizeof(sljit_sb);
else
inst_size += sizeof(sljit_si);
}
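/* Base encodings 101 (rbp, and r13 under REX.B) have no mod == 00 form, so a
   zero 8-bit displacement has to be emitted for them. */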
else if (reg_lmap[b & REG_MASK] == 5)
inst_size += sizeof(sljit_sb);
if ((b & OFFS_REG_MASK) != SLJIT_UNUSED) {
inst_size += 1; /* SIB byte. */
if (reg_map[OFFS_REG(b)] >= 8)
rex |= REX_X;
}
}
}
else if (!(flags & EX86_SSE2_OP2) && reg_map[b] >= 8)
rex |= REX_B;
if (a & SLJIT_IMM) {
if (flags & EX86_BIN_INS) {
if (imma <= 127 && imma >= -128) {
inst_size += 1;
flags |= EX86_BYTE_ARG;
} else
inst_size += 4;
}
else if (flags & EX86_SHIFT_INS) {
imma &= compiler->mode32 ? 0x1f : 0x3f;
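/* The hardware masks shift counts the same way (5 bits in 32-bit mode, 6 bits
   with REX.W), so this only decides between the shift-by-1 and shift-by-imm8
   encodings. */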
if (imma != 1) {
inst_size++;
flags |= EX86_BYTE_ARG;
}
} else if (flags & EX86_BYTE_ARG)
inst_size++;
else if (flags & EX86_HALF_ARG)
inst_size += sizeof(short);
else
inst_size += sizeof(sljit_si);
}
else {
SLJIT_ASSERT(!(flags & EX86_SHIFT_INS) || a == SLJIT_PREF_SHIFT_REG);
/* reg_map[SLJIT_PREF_SHIFT_REG] is less than 8. */
if (!(flags & EX86_SSE2_OP1) && reg_map[a] >= 8)
rex |= REX_R;
}
if (rex)
inst_size++;
inst = (sljit_ub*)ensure_buf(compiler, 1 + inst_size);
PTR_FAIL_IF(!inst);
/* Encode the instruction. */
INC_SIZE(inst_size);
if (flags & EX86_PREF_F2)
*inst++ = 0xf2;
if (flags & EX86_PREF_F3)
*inst++ = 0xf3;
if (flags & EX86_PREF_66)
*inst++ = 0x66;
if (rex)
*inst++ = rex;
buf_ptr = inst + size;
/* Encode mod/rm byte. */
if (!(flags & EX86_SHIFT_INS)) {
if ((flags & EX86_BIN_INS) && (a & SLJIT_IMM))
*inst = (flags & EX86_BYTE_ARG) ? GROUP_BINARY_83 : GROUP_BINARY_81;
if ((a & SLJIT_IMM) || (a == 0))
*buf_ptr = 0;
else if (!(flags & EX86_SSE2_OP1))
*buf_ptr = reg_lmap[a] << 3;
else
*buf_ptr = a << 3;
}
else {
if (a & SLJIT_IMM) {
if (imma == 1)
*inst = GROUP_SHIFT_1;
else
*inst = GROUP_SHIFT_N;
} else
*inst = GROUP_SHIFT_CL;
*buf_ptr = 0;
}
if (!(b & SLJIT_MEM))
*buf_ptr++ |= MOD_REG + ((!(flags & EX86_SSE2_OP2)) ? reg_lmap[b] : b);
else if ((b & REG_MASK) != SLJIT_UNUSED) {
if ((b & OFFS_REG_MASK) == SLJIT_UNUSED || (b & OFFS_REG_MASK) == TO_OFFS_REG(SLJIT_SP)) {
if (immb != 0 || reg_lmap[b & REG_MASK] == 5) {
if (immb <= 127 && immb >= -128)
*buf_ptr |= 0x40;
else
*buf_ptr |= 0x80;
}
if ((b & OFFS_REG_MASK) == SLJIT_UNUSED)
*buf_ptr++ |= reg_lmap[b & REG_MASK];
else {
*buf_ptr++ |= 0x04;
*buf_ptr++ = reg_lmap[b & REG_MASK] | (reg_lmap[OFFS_REG(b)] << 3);
}
if (immb != 0 || reg_lmap[b & REG_MASK] == 5) {
if (immb <= 127 && immb >= -128)
*buf_ptr++ = immb; /* 8 bit displacement. */
else {
*(sljit_si*)buf_ptr = immb; /* 32 bit displacement. */
buf_ptr += sizeof(sljit_si);
}
}
}
else {
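/* Scaled-index form: mod r/m 0x04 announces a SIB byte, which packs base
   (bits 0-2), index (bits 3-5) and the scale shift immb (bits 6-7); base 101
   again needs an explicit zero disp8. */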
if (reg_lmap[b & REG_MASK] == 5)
*buf_ptr |= 0x40;
*buf_ptr++ |= 0x04;
*buf_ptr++ = reg_lmap[b & REG_MASK] | (reg_lmap[OFFS_REG(b)] << 3) | (immb << 6);
if (reg_lmap[b & REG_MASK] == 5)
*buf_ptr++ = 0;
}
}
else {
*buf_ptr++ |= 0x04;
*buf_ptr++ = 0x25;
*(sljit_si*)buf_ptr = immb; /* 32 bit displacement. */
buf_ptr += sizeof(sljit_si);
}
if (a & SLJIT_IMM) {
if (flags & EX86_BYTE_ARG)
*buf_ptr = imma;
else if (flags & EX86_HALF_ARG)
*(short*)buf_ptr = imma;
else if (!(flags & EX86_SHIFT_INS))
*(sljit_si*)buf_ptr = imma;
}
return !(flags & EX86_SHIFT_INS) ? inst : (inst + 1);
}
/* --------------------------------------------------------------------- */
/* Call / return instructions */
/* --------------------------------------------------------------------- */
static SLJIT_INLINE sljit_si call_with_args(struct sljit_compiler *compiler, sljit_si type)
{
sljit_ub *inst;
#ifndef _WIN64
SLJIT_COMPILE_ASSERT(reg_map[SLJIT_R1] == 6 && reg_map[SLJIT_R0] < 8 && reg_map[SLJIT_R2] < 8, args_registers);
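/* SLJIT_R1 already sits in the second argument register (rsi here, rdx on
   Win64), so only R0 and, for three-argument calls, R2 need to be moved. */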
inst = (sljit_ub*)ensure_buf(compiler, 1 + ((type < SLJIT_CALL3) ? 3 : 6));
FAIL_IF(!inst);
INC_SIZE((type < SLJIT_CALL3) ? 3 : 6);
if (type >= SLJIT_CALL3) {
*inst++ = REX_W;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (0x2 /* rdx */ << 3) | reg_lmap[SLJIT_R2];
}
*inst++ = REX_W;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (0x7 /* rdi */ << 3) | reg_lmap[SLJIT_R0];
#else
SLJIT_COMPILE_ASSERT(reg_map[SLJIT_R1] == 2 && reg_map[SLJIT_R0] < 8 && reg_map[SLJIT_R2] < 8, args_registers);
inst = (sljit_ub*)ensure_buf(compiler, 1 + ((type < SLJIT_CALL3) ? 3 : 6));
FAIL_IF(!inst);
INC_SIZE((type < SLJIT_CALL3) ? 3 : 6);
if (type >= SLJIT_CALL3) {
*inst++ = REX_W | REX_R;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (0x0 /* r8 */ << 3) | reg_lmap[SLJIT_R2];
}
*inst++ = REX_W;
*inst++ = MOV_r_rm;
*inst++ = MOD_REG | (0x1 /* rcx */ << 3) | reg_lmap[SLJIT_R0];
#endif
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_fast_enter(struct sljit_compiler *compiler, sljit_si dst, sljit_sw dstw)
{
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_fast_enter(compiler, dst, dstw));
ADJUST_LOCAL_OFFSET(dst, dstw);
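/* A fast call leaves its return address on the top of the stack; it is popped
   into dst here so that sljit_emit_fast_return can push it back and RET
   through it. */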
/* For UNUSED dst. Uncommon, but possible. */
if (dst == SLJIT_UNUSED)
dst = TMP_REG1;
if (FAST_IS_REG(dst)) {
if (reg_map[dst] < 8) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1);
FAIL_IF(!inst);
INC_SIZE(1);
POP_REG(reg_lmap[dst]);
return SLJIT_SUCCESS;
}
inst = (sljit_ub*)ensure_buf(compiler, 1 + 2);
FAIL_IF(!inst);
INC_SIZE(2);
*inst++ = REX_B;
POP_REG(reg_lmap[dst]);
return SLJIT_SUCCESS;
}
/* REX_W is not necessary (dst is not immediate). */
compiler->mode32 = 1;
inst = emit_x86_instruction(compiler, 1, 0, 0, dst, dstw);
FAIL_IF(!inst);
*inst++ = POP_rm;
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_si sljit_emit_fast_return(struct sljit_compiler *compiler, sljit_si src, sljit_sw srcw)
{
sljit_ub *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_fast_return(compiler, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
if ((src & SLJIT_IMM) && NOT_HALFWORD(srcw)) {
FAIL_IF(emit_load_imm64(compiler, TMP_REG1, srcw));
src = TMP_REG1;
}
if (FAST_IS_REG(src)) {
if (reg_map[src] < 8) {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1 + 1);
FAIL_IF(!inst);
INC_SIZE(1 + 1);
PUSH_REG(reg_lmap[src]);
}
else {
inst = (sljit_ub*)ensure_buf(compiler, 1 + 2 + 1);
FAIL_IF(!inst);
INC_SIZE(2 + 1);
*inst++ = REX_B;
PUSH_REG(reg_lmap[src]);
}
}
else if (src & SLJIT_MEM) {
/* REX_W is not necessary (src is not immediate). */
compiler->mode32 = 1;
inst = emit_x86_instruction(compiler, 1, 0, 0, src, srcw);
FAIL_IF(!inst);
*inst++ = GROUP_FF;
*inst |= PUSH_rm;
inst = (sljit_ub*)ensure_buf(compiler, 1 + 1);
FAIL_IF(!inst);
INC_SIZE(1);
}
else {
SLJIT_ASSERT(IS_HALFWORD(srcw));
/* SLJIT_IMM. */
inst = (sljit_ub*)ensure_buf(compiler, 1 + 5 + 1);
FAIL_IF(!inst);
INC_SIZE(5 + 1);
*inst++ = PUSH_i32;
*(sljit_si*)inst = srcw;
inst += sizeof(sljit_si);
}
RET();
return SLJIT_SUCCESS;
}
/* --------------------------------------------------------------------- */
/* Extend input */
/* --------------------------------------------------------------------- */
static sljit_si emit_mov_int(struct sljit_compiler *compiler, sljit_si sign,
sljit_si dst, sljit_sw dstw,
sljit_si src, sljit_sw srcw)
{
sljit_ub* inst;
sljit_si dst_r;
compiler->mode32 = 0;
if (dst == SLJIT_UNUSED && !(src & SLJIT_MEM))
return SLJIT_SUCCESS; /* Empty instruction. */
if (src & SLJIT_IMM) {
if (FAST_IS_REG(dst)) {
if (sign || ((sljit_uw)srcw <= 0x7fffffff)) {
inst = emit_x86_instruction(compiler, 1, SLJIT_IMM, (sljit_sw)(sljit_si)srcw, dst, dstw);
FAIL_IF(!inst);
*inst = MOV_rm_i32;
return SLJIT_SUCCESS;
}
return emit_load_imm64(compiler, dst, srcw);
}
compiler->mode32 = 1;
inst = emit_x86_instruction(compiler, 1, SLJIT_IMM, (sljit_sw)(sljit_si)srcw, dst, dstw);
FAIL_IF(!inst);
*inst = MOV_rm_i32;
compiler->mode32 = 0;
return SLJIT_SUCCESS;
}
dst_r = FAST_IS_REG(dst) ? dst : TMP_REG1;
if ((dst & SLJIT_MEM) && FAST_IS_REG(src))
dst_r = src;
else {
if (sign) {
inst = emit_x86_instruction(compiler, 1, dst_r, 0, src, srcw);
FAIL_IF(!inst);
*inst++ = MOVSXD_r_rm;
} else {
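/* On x86-64 every 32-bit register write zero-extends into the upper half, so
   a plain mode32 mov is the unsigned counterpart of MOVSXD. */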
compiler->mode32 = 1;
FAIL_IF(emit_mov(compiler, dst_r, 0, src, srcw));
compiler->mode32 = 0;
}
}
if (dst & SLJIT_MEM) {
compiler->mode32 = 1;
inst = emit_x86_instruction(compiler, 1, dst_r, 0, dst, dstw);
FAIL_IF(!inst);
*inst = MOV_rm_r;
compiler->mode32 = 0;
}
return SLJIT_SUCCESS;
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,334 @@
/*
* Stack-less Just-In-Time compiler
*
* Copyright 2009-2012 Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* ------------------------------------------------------------------------ */
/* Locks */
/* ------------------------------------------------------------------------ */
#if (defined SLJIT_EXECUTABLE_ALLOCATOR && SLJIT_EXECUTABLE_ALLOCATOR) || (defined SLJIT_UTIL_GLOBAL_LOCK && SLJIT_UTIL_GLOBAL_LOCK)
#if (defined SLJIT_SINGLE_THREADED && SLJIT_SINGLE_THREADED)
#if (defined SLJIT_EXECUTABLE_ALLOCATOR && SLJIT_EXECUTABLE_ALLOCATOR)
static SLJIT_INLINE void allocator_grab_lock(void)
{
/* Always successful. */
}
static SLJIT_INLINE void allocator_release_lock(void)
{
/* Always successful. */
}
#endif /* SLJIT_EXECUTABLE_ALLOCATOR */
#if (defined SLJIT_UTIL_GLOBAL_LOCK && SLJIT_UTIL_GLOBAL_LOCK)
SLJIT_API_FUNC_ATTRIBUTE void SLJIT_CALL sljit_grab_lock(void)
{
/* Always successful. */
}
SLJIT_API_FUNC_ATTRIBUTE void SLJIT_CALL sljit_release_lock(void)
{
/* Always successful. */
}
#endif /* SLJIT_UTIL_GLOBAL_LOCK */
#elif defined(_WIN32) /* SLJIT_SINGLE_THREADED */
#include "windows.h"
#if (defined SLJIT_EXECUTABLE_ALLOCATOR && SLJIT_EXECUTABLE_ALLOCATOR)
static HANDLE allocator_mutex = 0;
static SLJIT_INLINE void allocator_grab_lock(void)
{
/* No idea what to do if an error occurs. Static mutexes should never fail... */
if (!allocator_mutex)
allocator_mutex = CreateMutex(NULL, TRUE, NULL);
else
WaitForSingleObject(allocator_mutex, INFINITE);
}
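/* CreateMutex is called with bInitialOwner == TRUE, so the creating thread
   leaves this function owning the mutex, just as WaitForSingleObject leaves
   later callers owning it. */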
static SLJIT_INLINE void allocator_release_lock(void)
{
ReleaseMutex(allocator_mutex);
}
#endif /* SLJIT_EXECUTABLE_ALLOCATOR */
#if (defined SLJIT_UTIL_GLOBAL_LOCK && SLJIT_UTIL_GLOBAL_LOCK)
static HANDLE global_mutex = 0;
SLJIT_API_FUNC_ATTRIBUTE void SLJIT_CALL sljit_grab_lock(void)
{
/* No idea what to do if an error occurs. Static mutexes should never fail... */
if (!global_mutex)
global_mutex = CreateMutex(NULL, TRUE, NULL);
else
WaitForSingleObject(global_mutex, INFINITE);
}
SLJIT_API_FUNC_ATTRIBUTE void SLJIT_CALL sljit_release_lock(void)
{
ReleaseMutex(global_mutex);
}
#endif /* SLJIT_UTIL_GLOBAL_LOCK */
#else /* _WIN32 */
#if (defined SLJIT_EXECUTABLE_ALLOCATOR && SLJIT_EXECUTABLE_ALLOCATOR)
#include <pthread.h>
static pthread_mutex_t allocator_mutex = PTHREAD_MUTEX_INITIALIZER;
static SLJIT_INLINE void allocator_grab_lock(void)
{
pthread_mutex_lock(&allocator_mutex);
}
static SLJIT_INLINE void allocator_release_lock(void)
{
pthread_mutex_unlock(&allocator_mutex);
}
#endif /* SLJIT_EXECUTABLE_ALLOCATOR */
#if (defined SLJIT_UTIL_GLOBAL_LOCK && SLJIT_UTIL_GLOBAL_LOCK)
#include <pthread.h>
static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
SLJIT_API_FUNC_ATTRIBUTE void SLJIT_CALL sljit_grab_lock(void)
{
pthread_mutex_lock(&global_mutex);
}
SLJIT_API_FUNC_ATTRIBUTE void SLJIT_CALL sljit_release_lock(void)
{
pthread_mutex_unlock(&global_mutex);
}
#endif /* SLJIT_UTIL_GLOBAL_LOCK */
#endif /* _WIN32 */
/* ------------------------------------------------------------------------ */
/* Stack */
/* ------------------------------------------------------------------------ */
#if (defined SLJIT_UTIL_STACK && SLJIT_UTIL_STACK) || (defined SLJIT_EXECUTABLE_ALLOCATOR && SLJIT_EXECUTABLE_ALLOCATOR)
#ifdef _WIN32
#include "windows.h"
#else
/* Provides mmap function. */
#include <sys/mman.h>
/* For detecting the page size. */
#include <unistd.h>
#ifndef MAP_ANON
#include <fcntl.h>
/* Some old systems do not have MAP_ANON. */
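/* Mapping /dev/zero with MAP_PRIVATE yields zero-filled copy-on-write pages,
   the classic substitute for MAP_ANON. */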
static sljit_si dev_zero = -1;
#if (defined SLJIT_SINGLE_THREADED && SLJIT_SINGLE_THREADED)
static SLJIT_INLINE sljit_si open_dev_zero(void)
{
dev_zero = open("/dev/zero", O_RDWR);
return dev_zero < 0;
}
#else /* SLJIT_SINGLE_THREADED */
#include <pthread.h>
static pthread_mutex_t dev_zero_mutex = PTHREAD_MUTEX_INITIALIZER;
static SLJIT_INLINE sljit_si open_dev_zero(void)
{
pthread_mutex_lock(&dev_zero_mutex);
dev_zero = open("/dev/zero", O_RDWR);
pthread_mutex_unlock(&dev_zero_mutex);
return dev_zero < 0;
}
#endif /* SLJIT_SINGLE_THREADED */
#endif
#endif
#endif /* SLJIT_UTIL_STACK || SLJIT_EXECUTABLE_ALLOCATOR */
#if (defined SLJIT_UTIL_STACK && SLJIT_UTIL_STACK)
/* Planning to make it even more clever in the future. */
static sljit_sw sljit_page_align = 0;
SLJIT_API_FUNC_ATTRIBUTE struct sljit_stack* SLJIT_CALL sljit_allocate_stack(sljit_uw limit, sljit_uw max_limit, void *allocator_data)
{
struct sljit_stack *stack;
union {
void *ptr;
sljit_uw uw;
} base;
#ifdef _WIN32
SYSTEM_INFO si;
#endif
SLJIT_UNUSED_ARG(allocator_data);
if (limit > max_limit || limit < 1)
return NULL;
#ifdef _WIN32
if (!sljit_page_align) {
GetSystemInfo(&si);
sljit_page_align = si.dwPageSize - 1;
}
#else
if (!sljit_page_align) {
sljit_page_align = sysconf(_SC_PAGESIZE);
/* Should never happen. */
if (sljit_page_align < 0)
sljit_page_align = 4096;
sljit_page_align--;
}
#endif
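/* sljit_page_align holds page_size - 1; since page sizes are powers of two,
   (x + sljit_page_align) & ~sljit_page_align rounds x up to a page boundary,
   e.g. 100 -> 4096 with 4096-byte pages. */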
/* Align max_limit. */
max_limit = (max_limit + sljit_page_align) & ~sljit_page_align;
stack = (struct sljit_stack*)SLJIT_MALLOC(sizeof(struct sljit_stack), allocator_data);
if (!stack)
return NULL;
#ifdef _WIN32
base.ptr = VirtualAlloc(NULL, max_limit, MEM_RESERVE, PAGE_READWRITE);
if (!base.ptr) {
SLJIT_FREE(stack, allocator_data);
return NULL;
}
stack->base = base.uw;
stack->limit = stack->base;
stack->max_limit = stack->base + max_limit;
if (sljit_stack_resize(stack, stack->base + limit)) {
sljit_free_stack(stack, allocator_data);
return NULL;
}
#else
#ifdef MAP_ANON
base.ptr = mmap(NULL, max_limit, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0);
#else
if (dev_zero < 0) {
if (open_dev_zero()) {
SLJIT_FREE(stack, allocator_data);
return NULL;
}
}
base.ptr = mmap(NULL, max_limit, PROT_READ | PROT_WRITE, MAP_PRIVATE, dev_zero, 0);
#endif
if (base.ptr == MAP_FAILED) {
SLJIT_FREE(stack, allocator_data);
return NULL;
}
stack->base = base.uw;
stack->limit = stack->base + limit;
stack->max_limit = stack->base + max_limit;
#endif
stack->top = stack->base;
return stack;
}
#undef PAGE_ALIGN
SLJIT_API_FUNC_ATTRIBUTE void SLJIT_CALL sljit_free_stack(struct sljit_stack* stack, void *allocator_data)
{
SLJIT_UNUSED_ARG(allocator_data);
#ifdef _WIN32
VirtualFree((void*)stack->base, 0, MEM_RELEASE);
#else
munmap((void*)stack->base, stack->max_limit - stack->base);
#endif
SLJIT_FREE(stack, allocator_data);
}
SLJIT_API_FUNC_ATTRIBUTE sljit_sw SLJIT_CALL sljit_stack_resize(struct sljit_stack* stack, sljit_uw new_limit)
{
sljit_uw aligned_old_limit;
sljit_uw aligned_new_limit;
if ((new_limit > stack->max_limit) || (new_limit < stack->base))
return -1;
#ifdef _WIN32
aligned_new_limit = (new_limit + sljit_page_align) & ~sljit_page_align;
aligned_old_limit = (stack->limit + sljit_page_align) & ~sljit_page_align;
if (aligned_new_limit != aligned_old_limit) {
if (aligned_new_limit > aligned_old_limit) {
if (!VirtualAlloc((void*)aligned_old_limit, aligned_new_limit - aligned_old_limit, MEM_COMMIT, PAGE_READWRITE))
return -1;
}
else {
if (!VirtualFree((void*)aligned_new_limit, aligned_old_limit - aligned_new_limit, MEM_DECOMMIT))
return -1;
}
}
stack->limit = new_limit;
return 0;
#else
if (new_limit >= stack->limit) {
stack->limit = new_limit;
return 0;
}
aligned_new_limit = (new_limit + sljit_page_align) & ~sljit_page_align;
aligned_old_limit = (stack->limit + sljit_page_align) & ~sljit_page_align;
/* If madvise is available, we release the unnecessary space. */
#if defined(MADV_DONTNEED)
if (aligned_new_limit < aligned_old_limit)
madvise((void*)aligned_new_limit, aligned_old_limit - aligned_new_limit, MADV_DONTNEED);
#elif defined(POSIX_MADV_DONTNEED)
if (aligned_new_limit < aligned_old_limit)
posix_madvise((void*)aligned_new_limit, aligned_old_limit - aligned_new_limit, POSIX_MADV_DONTNEED);
#endif
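/* MADV_DONTNEED releases the physical pages but keeps the mapping valid; for
   these private zero-backed pages a later access simply faults in fresh
   zero-filled pages. */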
stack->limit = new_limit;
return 0;
#endif
}
#endif /* SLJIT_UTIL_STACK */
#endif