All the clever stuff in this library is provided by Python's builtin `ast` module and `compile`/`exec`/`eval` system, along with IPython's `CachingCompiler`, which does some deep magic. `tinykernel` just brings them together with a little glue.
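The core idea can be sketched in a few lines of standard-library Python. This `MiniKernel` is an illustrative toy, not tinykernel's actual implementation: it shows the `ast`-plus-`exec`/`eval` glue, but omits the `CachingCompiler` source tracking that makes `getsource` and tracebacks work.

```python
import ast

class MiniKernel:
    "Toy persistent kernel: exec statements, eval a trailing expression."
    def __init__(self, glb=None):
        self.glb = glb or {}  # globals dict, persisted across calls

    def __call__(self, code):
        tree = ast.parse(code)
        last = None
        # If the final statement is an expression, split it off so we can
        # eval it and return its value (mimicking notebook-style output)
        if tree.body and isinstance(tree.body[-1], ast.Expr):
            last = tree.body.pop().value
        exec(compile(tree, '<kernel>', 'exec'), self.glb)
        if last is not None:
            return eval(compile(ast.Expression(last), '<kernel>', 'eval'), self.glb)

k = MiniKernel()
k("a=1")
k("a+=1")
k("a")  # returns 2
```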
With pip:

```sh
pip install tinykernel
```

With conda:

```sh
conda install -c fastai tinykernel
```
This library provides a single class, `TinyKernel`, which is a tiny persistent kernel for Python code:

```python
k = TinyKernel()
```
Call it, passing Python code, to have the code executed in a separate Python environment:

```python
k("a=1")
```
Expressions return the value of the expression:

```python
k('a')
```
All variables are persisted across calls:

```python
k("a+=1")
k('a')
```
Multi-line inputs are supported. If the last line is an expression, it is returned:

```python
k("""import types
b = types.SimpleNamespace(foo=a)
b""")
```
The original source code is stored, so `inspect.getsource` works and tracebacks have full details:

```python
k("""def f(): pass # a comment
import inspect
inspect.getsource(f)""")
```
When creating a `TinyKernel`, you can pass a dict of globals to initialize the environment:

```python
k = TinyKernel(glb={'foo':'bar'})
k('foo*2')
```
Pass `name` to customize the string that appears in tracebacks ("kernel" by default):

```python
import traceback

k = TinyKernel(name='myapp')
code = '''def f():
    return 1/0
print(f())'''
try: k(code)
except Exception as e: print(traceback.format_exc())
```
Thanks to Christopher Prohm, Matthias Bussonnier, and Aaron Meurer for their helpful insights in this Twitter thread.