Peter Kaminski
b4bb5d028f
differentiate shim and library
2023-10-08 07:04:12 -07:00
Peter Kaminski
0b886443e9
differentiate shim and library
2023-10-08 06:47:58 -07:00
Peter Kaminski
f6b304d105
Tiny wording change
"call" -> "use the"
2023-10-08 06:32:10 -07:00
Peter Kaminski
88750a26b6
flesh out README and make readability improvements
2023-10-08 06:16:45 -07:00
Peter Kaminski
6f7da1910f
make script more readable; no operational changes
2023-10-08 06:15:51 -07:00
Peter Kaminski
a955b4ebc4
if venv is used, it's nice to ignore it
2023-10-08 06:13:28 -07:00
36a4ebf487
Merge pull request #15 from jepler/issue5
Dynamically load requirements.txt in pyproject.toml
2023-10-07 09:30:25 +01:00
6b201e6a49
fix markdown
2023-10-07 09:29:33 +01:00
6af267cb47
update README.md
2023-10-07 09:27:49 +01:00
6298f5cac7
this script allows `python -mchap` in the top-level directory to work
2023-10-07 08:19:25 +01:00
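The script itself isn't shown in the log, so this is a guess at the mechanism: a hypothetical top-level chap.py that `python -mchap` executes and that forwards to the real package (entry-point name assumed):

    # Hypothetical top-level chap.py shim. "python -mchap" runs this file
    # as __main__, so the import below can resolve to the real package
    # once src/ is placed first on sys.path.
    import pathlib
    import sys

    sys.path.insert(0, str(pathlib.Path(__file__).resolve().parent / "src"))

    from chap.__main__ import main  # assumed entry-point location

    main()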
7cf8496302
Merge pull request #18 from jepler/misc-ux
Misc UX improvements
2023-10-02 18:33:08 -05:00
94562de018
ctrl-c, not ctrl-q, is quit
2023-10-02 15:19:28 -05:00
3852855a06
Fix an error toggling the system message in empty session
2023-10-02 15:19:28 -05:00
9ed7dd97cf
prevent resubmit/redraft of the system message
2023-10-02 15:19:28 -05:00
bec3e67be3
prevent focusing into markdown while completing
this could lead to problems, e.g. if you hit the "resubmit" key
2023-10-02 15:19:28 -05:00
f9d3575964
erase input value even for interrupted call
user can redraft
2023-10-02 15:19:28 -05:00
5574d31e0c
Say how pad seems to be helping
2023-10-02 15:19:17 -05:00
4e96c2928b
rename 'delete to end' as 'redraft'
2023-10-02 14:30:14 -05:00
ad1d956dad
Further improve UX during generation
.. by adding a LoadingIndicator and giving the button nicer styling
2023-10-02 14:30:13 -05:00
892b66aa39
Merge pull request #17 from jepler/add-codespell
add codespell to pre-commit
2023-10-02 10:26:06 -05:00
9351b4825d
add codespell
2023-10-02 10:01:12 -05:00
a956dc6bff
Merge pull request #16 from jepler/cancel
chap tui: add ability to cancel generation with escape key
2023-10-02 06:00:45 -05:00
9d03cd2210
chap tui: add ability to cancel generation with escape key
also reduces jank during the initial load; the app is mounted
with the whole conversation displayed & scrolled to the bottom.
2023-10-02 05:28:50 -05:00
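Cancellation like this is typically a Textual key binding whose action cancels the asyncio task driving generation; an illustrative sketch of the usual pattern, not chap's exact code:

    import asyncio
    from typing import Optional

    from textual.app import App
    from textual.binding import Binding

    class ChapTui(App):
        BINDINGS = [Binding("escape", "cancel_generation", "Cancel")]

        generation_task: Optional[asyncio.Task] = None

        def action_cancel_generation(self) -> None:
            # Cancelling the task interrupts the in-flight streaming
            # request; the coroutine's CancelledError handler can keep
            # the partial response and re-enable input.
            if self.generation_task is not None and not self.generation_task.done():
                self.generation_task.cancel()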
9d86e30f69
Dynamically load requirements.txt in pyproject.toml
This allows use of `pip install -r requirements.txt` by users who prefer
that way of trying out the package, while not affecting my typical usage.
This does create a dependency on a beta feature of setuptools, so it
could be fragile.
Closes: #5
2023-10-02 03:57:58 -05:00
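For reference, the setuptools dynamic-metadata mechanism this commit describes looks roughly like this in pyproject.toml (a sketch; the project's actual file may differ):

    [project]
    name = "chap"
    dynamic = ["dependencies"]

    [tool.setuptools.dynamic]
    # setuptools reads requirements.txt at build time; this file-based
    # dependency loading is the beta feature the commit message mentions.
    dependencies = { file = ["requirements.txt"] }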
f3bf17ca2f
Merge pull request #14 from jepler/huggingface
Add huggingface back-end
2023-09-29 10:18:58 -05:00
b6fa44f53e
Add huggingface back-end
defaults to Mistral 7B Instruct
2023-09-29 10:14:05 -05:00
2c04964b93
Improve display of default string params with special chars
2023-09-29 10:12:29 -05:00
90a4f17910
increase first-token timeout
2023-09-29 09:02:52 -05:00
6792eb0960
set some stop tokens
2023-09-29 08:45:54 -05:00
ea03aa0f20
Use llama2-instruct style prompting
this also works well with mistral-7b-instruct
See https://github.com/facebookresearch/llama/blob/v2/llama/generation.py
2023-09-29 08:39:33 -05:00
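The llama2-instruct format in the referenced generation.py folds the system message into the first user turn; a minimal sketch of the prompt assembly (illustrative names, not chap's actual code):

    # Delimiters from facebookresearch/llama's generation.py.
    B_INST, E_INST = "[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

    def build_prompt(system_message: str, user_message: str) -> str:
        # The system message rides inside the first [INST] block.
        return f"{B_INST} {B_SYS}{system_message}{E_SYS}{user_message} {E_INST}"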
9fe01de170
Add ability to toggle off history context in tui
2023-09-29 08:39:33 -05:00
9919c9a229
Merge pull request #13 from jepler/interactivity
More Interactivity in the TUI
2023-09-25 07:05:56 -05:00
0f9c6f1369
tui: can now delete part of history, or resubmit a prior prompt
2023-09-24 19:59:04 -05:00
a0322362fb
Allow "chap -S @filename" to specify a system prompt from a file
2023-09-24 19:31:01 -05:00
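An @filename convention like this usually reduces to a prefix check on the option value; a hypothetical sketch, not chap's verified code:

    def resolve_system_prompt(value: str) -> str:
        # "@filename" means: read the system prompt from that file;
        # any other value is used as the literal prompt text.
        if value.startswith("@"):
            with open(value[1:], encoding="utf-8") as f:
                return f.read().strip()
        return value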
eefd5063ac
chap tui: fix focusing on VerticalScroll inside Markdown
2023-09-24 19:29:55 -05:00
7c9c6963ce
Make `chap ask ... > output` not use CR-overwriting of lines
2023-09-24 19:29:29 -05:00
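The usual way to keep redirected output clean is to use CR-overwriting only when stdout is a terminal; a sketch of the likely check (assumed, not confirmed by the commit):

    import sys

    def show_progress(text: str) -> None:
        if sys.stdout.isatty():
            # Interactive terminal: redraw the current line in place.
            print("\r" + text, end="", flush=True)
        else:
            # Redirected to a file or pipe: plain lines, no CR tricks.
            print(text, flush=True)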
1b700aacfb
Merge pull request #12 from jepler/llama_cpp
Add llama.cpp support
2023-09-24 15:41:12 -05:00
26912b1fe9
that link just doesn't format well in the docs
2023-09-24 15:28:20 -05:00
cbcdec41fd
Move --backend, -B to base command
2023-09-24 15:27:50 -05:00
80feb624a5
markup
2023-09-24 15:27:24 -05:00
ec57f84eef
don't diss textgen so hard
2023-09-24 15:27:19 -05:00
b2aae88195
drop backend-help, it's integrated
2023-09-24 15:14:21 -05:00
382a8fb520
Add a backend list command
2023-09-24 15:14:06 -05:00
5d394b5fcf
document llama_cpp and environment vars
2023-09-24 14:56:54 -05:00
1d24aa6381
set default backend in environment
2023-09-24 14:56:47 -05:00
d7ad89f411
Allow configuration of backend parameters from environment
2023-09-24 14:50:09 -05:00
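These two commits amount to os.environ lookups for the backend name and its parameters; a sketch with hypothetical variable names (the real names are in the "document llama_cpp and environment vars" commit):

    import os

    # Hypothetical name: the default backend, overridable per-environment.
    backend_name = os.environ.get("CHAP_BACKEND", "openai_chatgpt")

    def backend_param(backend: str, name: str, default: str) -> str:
        # e.g. CHAP_LLAMA_CPP_URL=http://localhost:8080 (hypothetical)
        return os.environ.get(f"CHAP_{backend}_{name}".upper(), default)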
4a963fe23b
Add llama.cpp backend
Assumes your model wants llama 1.5-style prompting. Happy to add other
styles.
I had a decent experience with the Vicuna-13B-CoT.Q5_K_M.gguf model,
which fits in GPU on a 12GB RTX 3060.
2023-09-24 14:49:31 -05:00
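Assuming the backend talks to a running llama.cpp server over HTTP (an assumption; the commit doesn't say how it connects), the core exchange is a POST to the server's /completion endpoint:

    import requests

    # Sketch only: expects llama.cpp's examples/server running locally.
    # chap's real backend also handles streaming, stop tokens, and the
    # prompting style noted above.
    def complete(prompt: str, url: str = "http://localhost:8080/completion") -> str:
        resp = requests.post(url, json={"prompt": prompt, "n_predict": 256})
        resp.raise_for_status()
        return resp.json()["content"]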
02de0b3163
Merge pull request #11 from jepler/backend-option-help
Add backend option help; add chatgpt max-request-tokens
2023-09-24 11:01:13 -05:00
c2d801daf5
underscore is dash in backend options
2023-09-24 10:59:09 -05:00
76ac57fad1
Integrate backend options help into regular help
2023-09-24 10:56:38 -05:00