9351b4825d
add codespell
2023-10-02 10:01:12 -05:00
a956dc6bff
Merge pull request #16 from jepler/cancel
chap tui: add ability to cancel generation with escape key
2023-10-02 06:00:45 -05:00
9d03cd2210
chap tui: add ability to cancel generation with escape key
also reduces jank during the initial load; the app is mounted
with the full conversation displayed & scrolled to the bottom.
2023-10-02 05:28:50 -05:00
f3bf17ca2f
Merge pull request #14 from jepler/huggingface
Add huggingface back-end
2023-09-29 10:18:58 -05:00
b6fa44f53e
Add huggingface back-end
defaults to mistral 7b instruct
2023-09-29 10:14:05 -05:00
2c04964b93
Improve display of default string params with special chars
2023-09-29 10:12:29 -05:00
90a4f17910
increase first-token timeout
2023-09-29 09:02:52 -05:00
6792eb0960
set some stop tokens
2023-09-29 08:45:54 -05:00
ea03aa0f20
Use llama2-instruct style prompting
this also works well with mistral-7b-instruct
See https://github.com/facebookresearch/llama/blob/v2/llama/generation.py
2023-09-29 08:39:33 -05:00
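The llama2-instruct prompting referenced above can be sketched as follows. The `[INST]`/`<<SYS>>` tag strings come from the linked `generation.py`; the `build_prompt` function name is illustrative, not chap's actual API:

```python
# Sketch of llama2-instruct style prompt assembly, following
# https://github.com/facebookresearch/llama/blob/v2/llama/generation.py.
# build_prompt is an illustrative name, not chap's real code.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system: str, user: str) -> str:
    # The system prompt is folded into the first user turn.
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

print(build_prompt("You are a helpful assistant.", "Hello"))
```

This same format is why it "also works well with mistral-7b-instruct": Mistral's instruct models accept `[INST] ... [/INST]` delimiters.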
9fe01de170
Add ability to toggle off history context in tui
2023-09-29 08:39:33 -05:00
9919c9a229
Merge pull request #13 from jepler/interactivity
More Interactivity in the TUI
2023-09-25 07:05:56 -05:00
0f9c6f1369
tui: can now delete part of history, or resubmit a prior prompt
2023-09-24 19:59:04 -05:00
a0322362fb
Allow "chap -S @filename" to specify a system prompt from a file
2023-09-24 19:31:01 -05:00
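The `@filename` convention for `-S` can be sketched like this; `resolve_system_prompt` is a hypothetical helper for illustration, not chap's actual code:

```python
from pathlib import Path

def resolve_system_prompt(arg: str) -> str:
    """If the -S argument starts with '@', read the system prompt from
    that file; otherwise the argument itself is the prompt.
    Hypothetical helper, not chap's real implementation."""
    if arg.startswith("@"):
        return Path(arg[1:]).read_text()
    return arg
```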
eefd5063ac
chap tui: fix focusing on VerticalScroll inside Markdown
2023-09-24 19:29:55 -05:00
7c9c6963ce
Make `chap ask ... > output` not use CR-overwriting of lines
2023-09-24 19:29:29 -05:00
1b700aacfb
Merge pull request #12 from jepler/llama_cpp
Add llama.cpp support
2023-09-24 15:41:12 -05:00
26912b1fe9
that link just doesn't format well in the docs
2023-09-24 15:28:20 -05:00
cbcdec41fd
Move --backend, -B to base command
2023-09-24 15:27:50 -05:00
80feb624a5
markup
2023-09-24 15:27:24 -05:00
ec57f84eef
don't diss textgen so hard
2023-09-24 15:27:19 -05:00
b2aae88195
drop backend-help, it's integrated
2023-09-24 15:14:21 -05:00
382a8fb520
Add a backend list command
2023-09-24 15:14:06 -05:00
5d394b5fcf
document llama_cpp and environment vars
2023-09-24 14:56:54 -05:00
1d24aa6381
set default backend in environment
2023-09-24 14:56:47 -05:00
d7ad89f411
Allow configuration of backend parameters from environment
2023-09-24 14:50:09 -05:00
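One plausible shape for configuring backend parameters from the environment is a per-backend variable prefix. The `CHAP_<BACKEND>_<PARAM>` naming below is an assumption for illustration, not necessarily chap's documented scheme:

```python
import os

def backend_params_from_env(backend: str, environ=os.environ) -> dict:
    """Collect CHAP_<BACKEND>_<PARAM>=value variables into a dict of
    backend parameters. The naming scheme here is illustrative only."""
    prefix = f"CHAP_{backend.upper()}_"
    return {
        key[len(prefix):].lower(): value
        for key, value in environ.items()
        if key.startswith(prefix)
    }
```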
4a963fe23b
Add llama.cpp backend
Assumes your model wants llama 1.5-style prompting. Happy to add other
styles.
I had a decent experience with the Vicuna-13B-CoT.Q5_K_M.gguf model,
which fits in GPU on a 12GB RTX 3060.
2023-09-24 14:49:31 -05:00
02de0b3163
Merge pull request #11 from jepler/backend-option-help
Add backend option help; add chatgpt max-request-tokens
2023-09-24 11:01:13 -05:00
c2d801daf5
underscore is dash in backend options
2023-09-24 10:59:09 -05:00
76ac57fad1
Integrate backend options help into regular help
2023-09-24 10:56:38 -05:00
350c0a3d70
Add backend option help
2023-09-24 10:56:28 -05:00
9f5f181a3c
Add max_request_tokens for openai backend
2023-09-24 10:55:06 -05:00
a35a4d6c12
document which Python versions are intended to work
2023-09-23 12:06:09 -05:00
62a4a69e82
Merge pull request #10 from jepler/gpt4
Support backend parameters via `-B name:value`
2023-09-23 08:54:37 -05:00
e539a8568a
avoid erroring on bad sessions with null content
2023-09-23 08:52:44 -05:00
acf9bc4f03
Support backend parameters via -B name:value
... and expose some for lorem & chatgpt.
Eventually the never-used max_query_size and timeout parameters
will be moved to backend parameters, but the main goal was to add
the ability to access GPT-4:
```
$ chap ask -b openai_chatgpt -B model:gpt-4 is gpt4 smarter than gpt3.5
is gpt4 smarter than gpt3.5
As of my knowledge at this time, GPT-4 doesn't exist. Perhaps in the future,
there will be a successor to GPT-3. Usually, each newer version of a model like
this is designed to be more powerful and sophisticated, so it would likely be
"smarter" than GPT-3. However, since it doesn't currently exist, I can't make a
definite comparison.
```
😁
2023-09-23 08:47:57 -05:00
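Parsing a `-B name:value` option can be sketched as a split on the first colon; per the "underscore is dash in backend options" commit above, dashes and underscores are interchangeable in option names. This parser is an illustrative sketch, not chap's actual implementation:

```python
def parse_backend_option(spec: str) -> tuple[str, str]:
    """Parse a -B name:value option into (name, value), normalizing
    dashes to underscores in the name. Illustrative sketch only."""
    name, sep, value = spec.partition(":")
    if not sep:
        raise ValueError(f"expected name:value, got {spec!r}")
    return name.replace("-", "_"), value
```

So `-B model:gpt-4` and `-B max-request-tokens:1024` both resolve to the underlying snake_case parameter names.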
f260221cb9
place focus back in prompt box when done generating
2023-09-23 08:47:57 -05:00
5c697b460e
use new/existing session in: cat, render, tui
2023-09-23 08:47:55 -05:00
6e7184623b
Factor out "uses_new_session", "uses_existing_session"; use in "chap ask"
2023-09-23 08:46:29 -05:00
5fbb48fcb6
Add --no-system to chap-render
2023-09-23 08:45:26 -05:00
1f554df3b0
Add "--files-with-matches" and match highlighting to chap grep
2023-09-23 08:44:33 -05:00
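Match highlighting of the kind added to `chap grep` can be sketched with ANSI escapes; this is a generic illustration, not chap's code (chap's TUI uses rich/textual-style rendering):

```python
import re

def highlight(text: str, pattern: str) -> str:
    """Wrap regex matches in ANSI reverse-video escapes (illustrative)."""
    return re.sub(pattern, lambda m: f"\x1b[7m{m.group(0)}\x1b[0m", text)
```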
84f2531c9c
Add "--no-system" to chap cat to skip showing the system message
2023-09-23 08:43:38 -05:00
1c597cd049
Merge pull request #7 from jepler/issue6
add `chap --version`
2023-05-20 12:21:06 -05:00
4c910726b9
Merge pull request #8 from peterkaminski/add-command-descriptions
Add command descriptions
2023-05-20 11:51:04 -05:00
fa197ff708
Merge pull request #9 from jepler/textual021compat
Textual 0.21 compatibility
2023-05-20 11:50:36 -05:00
9b194dc897
Fix scrolling of main container since 0.21
2023-05-20 11:44:59 -05:00
2095b09837
run isort before pylint
2023-05-20 11:44:59 -05:00
Peter Kaminski
7211625b9f
remove spurious OS-related file
2023-04-08 09:30:09 -07:00
Peter Kaminski
6881d57477
add command descriptions
2023-04-08 09:25:17 -07:00
eab73bdee1
make it work even if not-git and no __version__.py
2023-04-06 14:37:54 -05:00
381aa0a8b6
run isort before pylint
2023-04-06 11:28:29 -05:00