Compare commits

No commits in common. "main" and "reduce-code-block-jank" have entirely different histories.

16 changed files with 243 additions and 533 deletions

.github/workflows/codeql.yml

@@ -0,0 +1,48 @@
# SPDX-FileCopyrightText: 2022 Jeff Epler
#
# SPDX-License-Identifier: CC0-1.0
name: "CodeQL"
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
schedule:
- cron: "53 3 * * 5"
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ python ]
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Dependencies (python)
run: pip3 install -r requirements-dev.txt
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
queries: +security-and-quality
- name: Autobuild
uses: github/codeql-action/autobuild@v2
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{ matrix.language }}"


@@ -18,10 +18,10 @@ jobs:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v4
with:
python-version: 3.11


@@ -17,10 +17,10 @@
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: pre-commit
uses: pre-commit/action@v3.0.1
uses: pre-commit/action@v3.0.0
- name: Make patch
if: failure()
@@ -28,7 +28,7 @@ jobs:
- name: Upload patch
if: failure()
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: patch
path: ~/pre-commit.patch
@@ -36,10 +36,10 @@ jobs:
test-release:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v4
with:
python-version: 3.11
@@ -53,7 +53,7 @@ jobs:
run: python -mbuild
- name: Upload artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v3
with:
name: dist
path: dist/*

LICENSES/CC0-1.0.txt

@@ -0,0 +1,121 @@
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.


@@ -81,31 +81,6 @@ Put your OpenAI API key in the platform configuration directory for chap, e.g.,
* `chap grep needle`
## `@FILE` arguments
It's useful to set a bunch of related arguments together, for instance to fully
configure a back-end. This functionality is implemented via `@FILE` arguments.
Before any other command-line argument parsing is performed, `@FILE` arguments are expanded:
* An `@FILE` argument is searched relative to the current directory
* An `@:FILE` argument is searched relative to the configuration directory (e.g., $HOME/.config/chap/presets)
* If an argument starts with a literal `@`, double it: `@@`
* `@.` stops processing any further `@FILE` arguments and leaves them unchanged.
The contents of an `@FILE` are parsed according to `shlex.split(comments=True)`.
Comments are supported.
A typical content might look like this:
```
# cfg/gpt-4o: Use more expensive gpt 4o and custom prompt
--backend openai-chatgpt
-B model:gpt-4o
-s :my-custom-system-message.txt
```
and you might use it with
```
chap @:cfg/gpt-4o ask what version of gpt is this
```
## Interactive terminal usage
The interactive terminal mode is accessed via `chap tui`.
@@ -144,26 +119,21 @@ an existing session with `-s`. Or, you can continue the last session with
You can set the "system message" with the `-S` flag.
You can select the text generating backend with the `-b` flag:
* openai-chatgpt: the default, paid API, best quality results. Also works with compatible API implementations including llama-cpp when the correct backend URL is specified.
* openai-chatgpt: the default, paid API, best quality results
* llama-cpp: Works with [llama.cpp's http server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md) and can run locally with various models,
though it is [optimized for models that use the llama2-style prompting](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Consider using llama.cpp's OpenAI compatible API with the openai-chatgpt backend instead, in which case the server can apply the chat template.
though it is [optimized for models that use the llama2-style prompting](https://huggingface.co/blog/llama2#how-to-prompt-llama-2).
Set the server URL with `-B url:...`.
* textgen: Works with https://github.com/oobabooga/text-generation-webui and can run locally with various models.
Needs the server URL in *$configuration_directory/textgen\_url*.
* mistral: Works with the [mistral paid API](https://docs.mistral.ai/).
* anthropic: Works with the [anthropic paid API](https://docs.anthropic.com/en/home).
* huggingface: Works with the [huggingface API](https://huggingface.co/docs/api-inference/index), which includes a free tier.
* lorem: local non-AI lorem generator for testing
Backends have settings such as URLs and where API keys are stored. Use `chap --backend
<BACKEND> --help` to list settings for a particular backend.
## Environment variables
The backend can be set with the `CHAP_BACKEND` environment variable.
Backend settings can be set with `CHAP_<backend_name>_<parameter_name>`, with `backend_name` and `parameter_name` all in caps.
For instance, `CHAP_LLAMA_CPP_URL=http://server.local:8080/completion` changes the default server URL for the llama-cpp backend.
For instance, `CHAP_LLAMA_CPP_URL=http://server.local:8080/completion` changes the default server URL for the llama-cpp back-end.
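
One plausible reading of that naming rule as code (the helper is hypothetical, not part of chap):

```
# Hypothetical helper illustrating the CHAP_<BACKEND>_<PARAMETER> rule:
# hyphens in the backend name become underscores, everything is upper-cased.
def chap_env_var(backend_name: str, parameter_name: str) -> str:
    return f"CHAP_{backend_name.replace('-', '_').upper()}_{parameter_name.upper()}"

assert chap_env_var("llama-cpp", "url") == "CHAP_LLAMA_CPP_URL"
```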
## Importing from ChatGPT


@@ -8,6 +8,6 @@ lorem-text
platformdirs
pyperclip
simple_parsing
textual[syntax] >= 4
textual[syntax]
tiktoken
websockets


@@ -1,95 +0,0 @@
# SPDX-FileCopyrightText: 2024 Jeff Epler <jepler@gmail.com>
#
# SPDX-License-Identifier: MIT
import json
from dataclasses import dataclass
from typing import AsyncGenerator, Any
import httpx
from ..core import AutoAskMixin, Backend
from ..key import UsesKeyMixin
from ..session import Assistant, Role, Session, User
class Anthropic(AutoAskMixin, UsesKeyMixin):
@dataclass
class Parameters:
url: str = "https://api.anthropic.com"
model: str = "claude-3-5-sonnet-20240620"
max_new_tokens: int = 1000
api_key_name = "anthropic_api_key"
def __init__(self) -> None:
super().__init__()
self.parameters = self.Parameters()
system_message = """\
Answer each question accurately and thoroughly.
"""
def make_full_query(self, messages: Session, max_query_size: int) -> dict[str, Any]:
system = [m.content for m in messages if m.role == Role.SYSTEM]
messages = [m for m in messages if m.role != Role.SYSTEM and m.content]
del messages[:-max_query_size]
result = dict(
model=self.parameters.model,
max_tokens=self.parameters.max_new_tokens,
messages=[dict(role=str(m.role), content=m.content) for m in messages],
stream=True,
)
if system and system[0]:
result["system"] = system[0]
return result
async def aask(
self,
session: Session,
query: str,
*,
max_query_size: int = 5,
timeout: float = 180,
) -> AsyncGenerator[str, None]:
new_content: list[str] = []
params = self.make_full_query(session + [User(query)], max_query_size)
try:
async with httpx.AsyncClient(timeout=timeout) as client:
async with client.stream(
"POST",
f"{self.parameters.url}/v1/messages",
json=params,
headers={
"x-api-key": self.get_key(),
"content-type": "application/json",
"anthropic-version": "2023-06-01",
"anthropic-beta": "messages-2023-12-15",
},
) as response:
if response.status_code == 200:
async for line in response.aiter_lines():
if line.startswith("data:"):
data = line.removeprefix("data:").strip()
j = json.loads(data)
content = j.get("delta", {}).get("text", "")
if content:
new_content.append(content)
yield content
else:
content = f"\nFailed with {response=!r}"
new_content.append(content)
yield content
async for line in response.aiter_lines():
new_content.append(line)
yield line
except httpx.HTTPError as e:
content = f"\nException: {e!r}"
new_content.append(content)
yield content
session.extend([User(query), Assistant("".join(new_content))])
def factory() -> Backend:
"""Uses the anthropic text-generation-interface web API"""
return Anthropic()
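
For context, `aask` above is an async generator that yields response tokens as they arrive. A minimal, hypothetical driver (a plain list stands in for chap's `Session` type; a configured Anthropic API key and network access are assumed):

```
import asyncio

async def demo() -> None:
    backend = factory()
    session = []  # empty history; aask itself appends the new exchange
    async for token in backend.aask(session, "Summarize CC0 in one sentence."):
        print(token, end="", flush=True)

asyncio.run(demo())
```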


@@ -9,11 +9,11 @@ from typing import Any, AsyncGenerator
import httpx
from ..core import AutoAskMixin, Backend
from ..key import UsesKeyMixin
from ..key import get_key
from ..session import Assistant, Role, Session, User
class HuggingFace(AutoAskMixin, UsesKeyMixin):
class HuggingFace(AutoAskMixin):
@dataclass
class Parameters:
url: str = "https://api-inference.huggingface.co"
@@ -24,7 +24,6 @@ class HuggingFace(AutoAskMixin, UsesKeyMixin):
after_user: str = """ [/INST] """
after_assistant: str = """ </s><s>[INST] """
stop_token_id = 2
api_key_name = "huggingface_api_token"
def __init__(self) -> None:
super().__init__()
@@ -111,6 +110,10 @@ A dialog, where USER interacts with AI. AI is helpful, kind, obedient, honest, a
session.extend([User(query), Assistant("".join(new_content))])
@classmethod
def get_key(cls) -> str:
return get_key("huggingface_api_token")
def factory() -> Backend:
"""Uses the huggingface text-generation-interface web API"""


@@ -18,16 +18,10 @@ class LlamaCpp(AutoAskMixin):
url: str = "http://localhost:8080/completion"
"""The URL of a llama.cpp server's completion endpoint."""
start_prompt: str = "<|begin_of_text|>"
system_format: str = (
"<|start_header_id|>system<|end_header_id|>\n\n{}<|eot_id|>"
)
user_format: str = "<|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|>"
assistant_format: str = (
"<|start_header_id|>assistant<|end_header_id|>\n\n{}<|eot_id|>"
)
end_prompt: str = "<|start_header_id|>assistant<|end_header_id|>\n\n"
stop: str | None = None
start_prompt: str = """<s>[INST] <<SYS>>\n"""
after_system: str = "\n<</SYS>>\n\n"
after_user: str = """ [/INST] """
after_assistant: str = """ </s><s>[INST] """
def __init__(self) -> None:
super().__init__()
@@ -40,16 +34,17 @@ A dialog, where USER interacts with AI. AI is helpful, kind, obedient, honest, a
def make_full_query(self, messages: Session, max_query_size: int) -> str:
del messages[1:-max_query_size]
result = [self.parameters.start_prompt]
formats = {
Role.SYSTEM: self.parameters.system_format,
Role.USER: self.parameters.user_format,
Role.ASSISTANT: self.parameters.assistant_format,
}
for m in messages:
content = (m.content or "").strip()
if not content:
continue
result.append(formats[m.role].format(content))
result.append(content)
if m.role == Role.SYSTEM:
result.append(self.parameters.after_system)
elif m.role == Role.ASSISTANT:
result.append(self.parameters.after_assistant)
elif m.role == Role.USER:
result.append(self.parameters.after_user)
full_query = "".join(result)
return full_query
@@ -64,7 +59,7 @@ A dialog, where USER interacts with AI. AI is helpful, kind, obedient, honest, a
params = {
"prompt": self.make_full_query(session + [User(query)], max_query_size),
"stream": True,
"stop": ["</s>", "<s>", "[INST]", "<|eot_id|>"],
"stop": ["</s>", "<s>", "[INST]"],
}
new_content: list[str] = []
try:
@@ -101,10 +96,5 @@ A dialog, where USER interacts with AI. AI is helpful, kind, obedient, honest, a
def factory() -> Backend:
"""Uses the llama.cpp completion web API
Note: Consider using the openai-chatgpt backend with a custom URL instead.
The llama.cpp server will automatically apply common chat templates with the
openai-chatgpt backend, while chat templates must be manually configured client side
with this backend."""
"""Uses the llama.cpp completion web API"""
return LlamaCpp()
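
The two variants above differ only in prompt wire format: one emits llama3-style header tokens, the other llama2-style `[INST]` markers. A worked sketch of the llama3-style assembly, with format strings copied from the `Parameters` above (where `end_prompt` gets appended is outside this hunk, so its placement here is an assumption):

```
start_prompt = "<|begin_of_text|>"
system_format = "<|start_header_id|>system<|end_header_id|>\n\n{}<|eot_id|>"
user_format = "<|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|>"
end_prompt = "<|start_header_id|>assistant<|end_header_id|>\n\n"

# Illustrative messages; the real ones come from the Session.
prompt = (
    start_prompt
    + system_format.format("A dialog, where USER interacts with AI.")
    + user_format.format("Hello!")
    + end_prompt  # cues the model to reply as the assistant
)
print(prompt)
```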


@@ -1,96 +0,0 @@
# SPDX-FileCopyrightText: 2024 Jeff Epler <jepler@gmail.com>
#
# SPDX-License-Identifier: MIT
import json
from dataclasses import dataclass
from typing import AsyncGenerator, Any
import httpx
from ..core import AutoAskMixin
from ..key import UsesKeyMixin
from ..session import Assistant, Session, User
class Mistral(AutoAskMixin, UsesKeyMixin):
@dataclass
class Parameters:
url: str = "https://api.mistral.ai"
model: str = "open-mistral-7b"
max_new_tokens: int = 1000
api_key_name = "mistral_api_key"
def __init__(self) -> None:
super().__init__()
self.parameters = self.Parameters()
system_message = """\
Answer each question accurately and thoroughly.
"""
def make_full_query(self, messages: Session, max_query_size: int) -> dict[str, Any]:
messages = [m for m in messages if m.content]
del messages[1:-max_query_size]
result = dict(
model=self.parameters.model,
max_tokens=self.parameters.max_new_tokens,
messages=[dict(role=str(m.role), content=m.content) for m in messages],
stream=True,
)
return result
async def aask(
self,
session: Session,
query: str,
*,
max_query_size: int = 5,
timeout: float = 180,
) -> AsyncGenerator[str, None]:
new_content: list[str] = []
params = self.make_full_query(session + [User(query)], max_query_size)
try:
async with httpx.AsyncClient(timeout=timeout) as client:
async with client.stream(
"POST",
f"{self.parameters.url}/v1/chat/completions",
json=params,
headers={
"Authorization": f"Bearer {self.get_key()}",
"content-type": "application/json",
"accept": "application/json",
"model": "application/json",
},
) as response:
if response.status_code == 200:
async for line in response.aiter_lines():
if line.startswith("data:"):
data = line.removeprefix("data:").strip()
if data == "[DONE]":
break
j = json.loads(data)
content = (
j.get("choices", [{}])[0]
.get("delta", {})
.get("content", "")
)
if content:
new_content.append(content)
yield content
else:
content = f"\nFailed with {response=!r}"
new_content.append(content)
yield content
async for line in response.aiter_lines():
new_content.append(line)
yield line
except httpx.HTTPError as e:
content = f"\nException: {e!r}"
new_content.append(content)
yield content
session.extend([User(query), Assistant("".join(new_content))])
factory = Mistral
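
The streaming loop above follows the server-sent-events convention: each `data:` line carries a JSON delta, and a literal `[DONE]` ends the stream. The same parsing rules applied to two illustrative lines (payloads made up for the example):

```
import json

lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    "data: [DONE]",
]
for line in lines:
    if line.startswith("data:"):
        data = line.removeprefix("data:").strip()
        if data == "[DONE]":
            break
        j = json.loads(data)
        content = j.get("choices", [{}])[0].get("delta", {}).get("content", "")
        if content:
            print(content, end="")  # prints "Hello"
```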


@@ -12,7 +12,7 @@ import httpx
import tiktoken
from ..core import Backend
from ..key import UsesKeyMixin
from ..key import get_key
from ..session import Assistant, Message, Session, User, session_to_list
@@ -63,29 +63,15 @@ class EncodingMeta:
return cls(encoding, tokens_per_message, tokens_per_name, tokens_overhead)
class ChatGPT(UsesKeyMixin):
class ChatGPT:
@dataclass
class Parameters:
model: str = "gpt-4o-mini"
"""The model to use. The most common alternative value is 'gpt-4o'."""
model: str = "gpt-3.5-turbo"
"""The model to use. The most common alternative value is 'gpt-4'."""
max_request_tokens: int = 1024
"""The approximate greatest number of tokens to send in a request. When the session is long, the system prompt and 1 or more of the most recent interaction steps are sent."""
url: str = "https://api.openai.com/v1/chat/completions"
"""The URL of a chatgpt-compatible server's completion endpoint. Notably, llama.cpp's server is compatible with this backend, and can automatically apply common chat templates too."""
temperature: float | None = None
"""The model temperature for sampling"""
top_p: float | None = None
"""The model temperature for sampling"""
api_key_name: str = "openai_api_key"
"""The OpenAI API key"""
parameters: Parameters
def __init__(self) -> None:
self.parameters = self.Parameters()
@@ -111,7 +97,7 @@ class ChatGPT(UsesKeyMixin):
def ask(self, session: Session, query: str, *, timeout: float = 60) -> str:
full_prompt = self.make_full_prompt(session + [User(query)])
response = httpx.post(
self.parameters.url,
"https://api.openai.com/v1/chat/completions",
json={
"model": self.parameters.model,
"messages": session_to_list(full_prompt),
@@ -142,12 +128,10 @@ class ChatGPT(UsesKeyMixin):
async with httpx.AsyncClient(timeout=timeout) as client:
async with client.stream(
"POST",
self.parameters.url,
"https://api.openai.com/v1/chat/completions",
headers={"authorization": f"Bearer {self.get_key()}"},
json={
"model": self.parameters.model,
"temperature": self.parameters.temperature,
"top_p": self.parameters.top_p,
"stream": True,
"messages": session_to_list(full_prompt),
},
@@ -176,6 +160,10 @@ class ChatGPT(UsesKeyMixin):
session.extend([User(query), Assistant("".join(new_content))])
@classmethod
def get_key(cls) -> str:
return get_key("openai_api_key")
def factory() -> Backend:
"""Uses the OpenAI chat completion API"""


@@ -100,9 +100,8 @@ def verbose_ask(api: Backend, session: Session, q: str, print_prompt: bool) -> s
@command_uses_new_session
@click.option("--print-prompt/--no-print-prompt", default=True)
@click.option("--stdin/--no-stdin", "use_stdin", default=False)
@click.argument("prompt", nargs=-1)
def main(obj: Obj, prompt: list[str], use_stdin: bool, print_prompt: bool) -> None:
@click.argument("prompt", nargs=-1, required=True)
def main(obj: Obj, prompt: str, print_prompt: bool) -> None:
"""Ask a question (command-line argument is passed as prompt)"""
session = obj.session
assert session is not None
@@ -113,16 +112,9 @@ def main(obj: Obj, prompt: list[str], use_stdin: bool, print_prompt: bool) -> No
api = obj.api
assert api is not None
if use_stdin:
if prompt:
raise click.UsageError("Can't use 'prompt' together with --stdin")
joined_prompt = sys.stdin.read()
else:
joined_prompt = " ".join(prompt)
# symlink_session_filename(session_filename)
response = verbose_ask(api, session, joined_prompt, print_prompt=print_prompt)
response = verbose_ask(api, session, " ".join(prompt), print_prompt=print_prompt)
print(f"Saving session to {session_filename}", file=sys.stderr)
if response is not None:
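
The `--stdin` handling added above reduces to a small decision: read all of standard input, or join the positional arguments with spaces. Restated as a standalone function (hypothetical name; the real code raises `click.UsageError` instead):

```
import sys

def resolve_prompt(prompt: list[str], use_stdin: bool) -> str:
    if use_stdin:
        if prompt:
            raise ValueError("Can't use 'prompt' together with --stdin")
        return sys.stdin.read()
    return " ".join(prompt)
```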


@@ -58,4 +58,4 @@ Markdown {
margin: 0 1 0 0;
}
SubmittableTextArea { height: auto; min-height: 5; margin: 0; border: none; border-left: heavy $primary }
SubmittableTextArea { height: 3 }


@@ -38,7 +38,7 @@ ANSI_SEQUENCES_KEYS["\x1b\n"] = (Keys.F9,) # type: ignore
class SubmittableTextArea(TextArea):
BINDINGS = [
Binding("f9", "app.submit", "Submit", show=True),
Binding("f9", "submit", "Submit", show=True),
Binding("tab", "focus_next", show=False, priority=True), # no inserting tabs
]
@@ -51,10 +51,10 @@ def parser_factory() -> MarkdownIt:
class ChapMarkdown(Markdown, can_focus=True, can_focus_children=False):
BINDINGS = [
Binding("ctrl+c", "app.yank", "Copy text", show=True),
Binding("ctrl+r", "app.resubmit", "resubmit", show=True),
Binding("ctrl+x", "app.redraft", "redraft", show=True),
Binding("ctrl+q", "app.toggle_history", "history toggle", show=True),
Binding("ctrl+y", "yank", "Yank text", show=True),
Binding("ctrl+r", "resubmit", "resubmit", show=True),
Binding("ctrl+x", "redraft", "redraft", show=True),
Binding("ctrl+q", "toggle_history", "history toggle", show=True),
]
@@ -75,7 +75,7 @@ class CancelButton(Button):
class Tui(App[None]):
CSS_PATH = "tui.css"
BINDINGS = [
Binding("ctrl+q", "quit", "Quit", show=True, priority=True),
Binding("ctrl+c", "quit", "Quit", show=True, priority=True),
]
def __init__(
@@ -145,6 +145,7 @@ class Tui(App[None]):
await self.container.mount_all(
[markdown_for_step(User(query)), output], before="#pad"
)
tokens: list[str] = []
update: asyncio.Queue[bool] = asyncio.Queue(1)
for markdown in self.container.children:
@@ -165,22 +166,15 @@
)
async def render_fun() -> None:
old_len = 0
while await update.get():
content = message.content
new_len = len(content)
new_content = content[old_len:new_len]
if new_content:
if old_len:
await output.append(new_content)
else:
output.update(content)
self.container.scroll_end()
old_len = new_len
await asyncio.sleep(0.01)
if tokens:
output.update("".join(tokens).strip())
self.container.scroll_end()
await asyncio.sleep(0.1)
async def get_token_fun() -> None:
async for token in self.api.aask(session, query):
tokens.append(token)
message.content += token
try:
update.put_nowait(True)
@@ -298,9 +292,7 @@ def main(obj: Obj, replace_system_prompt: bool) -> None:
assert session_filename is not None
if replace_system_prompt:
session[0].content = (
api.system_message if obj.system_message is None else obj.system_message
)
session[0].content = obj.system_message or api.system_message
tui = Tui(api, session)
tui.run()
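
The `render_fun`/`get_token_fun` pair above is a producer/consumer: one task appends tokens to the message and signals a one-slot queue, the other wakes and renders only the unseen suffix (`content[old_len:]`), which is what avoids re-rendering the whole markdown on every token. A self-contained sketch of the pattern (the `False` shutdown sentinel is an assumption; the real teardown is outside this hunk):

```
import asyncio

async def main() -> None:
    message = {"content": ""}
    update: asyncio.Queue[bool] = asyncio.Queue(1)

    async def get_token_fun() -> None:
        for token in ["Hel", "lo ", "world"]:  # stand-in for api.aask(...)
            message["content"] += token
            try:
                update.put_nowait(True)
            except asyncio.QueueFull:
                pass  # a render is already pending
            await asyncio.sleep(0.01)
        await update.put(False)  # assumed shutdown signal

    async def render_fun() -> None:
        old_len = 0
        while await update.get():
            content = message["content"]
            print(content[old_len:], end="", flush=True)  # append only the suffix
            old_len = len(content)

    await asyncio.gather(get_token_fun(), render_fun())

asyncio.run(main())
```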


@@ -3,17 +3,13 @@
# SPDX-License-Identifier: MIT
from collections.abc import Sequence
import asyncio
import datetime
import io
import importlib
import os
import pathlib
import pkgutil
import subprocess
import shlex
import textwrap
from dataclasses import MISSING, Field, dataclass, fields
from typing import (
Any,
@@ -21,7 +17,6 @@ from typing import (
Callable,
Optional,
Union,
IO,
cast,
get_origin,
get_args,
@@ -32,6 +27,7 @@ import click
import platformdirs
from simple_parsing.docstring import get_attribute_docstring
from typing_extensions import Protocol
from . import backends, commands
from .session import Message, Session, System, session_from_file
@@ -43,8 +39,6 @@ else:
conversations_path = platformdirs.user_state_path("chap") / "conversations"
conversations_path.mkdir(parents=True, exist_ok=True)
configuration_path = platformdirs.user_config_path("chap")
preset_path = configuration_path / "preset"
class ABackend(Protocol):
@@ -131,11 +125,7 @@ def configure_api_from_environment(
setattr(api.parameters, field.name, tv)
def get_api(ctx: click.Context | None = None, name: str | None = None) -> Backend:
if ctx is None:
ctx = click.Context(click.Command("chap"))
if name is None:
name = os.environ.get("CHAP_BACKEND", "openai_chatgpt")
def get_api(ctx: click.Context, name: str = "openai_chatgpt") -> Backend:
name = name.replace("-", "_")
backend = cast(
Backend, importlib.import_module(f"{__package__}.backends.{name}").factory()
@@ -187,20 +177,12 @@ def colonstr(arg: str) -> tuple[str, str]:
def set_system_message(ctx: click.Context, param: click.Parameter, value: str) -> None:
if value is None:
return
if value and value.startswith("@"):
with open(value[1:], "r", encoding="utf-8") as f:
value = f.read().rstrip()
ctx.obj.system_message = value
def set_system_message_from_file(
ctx: click.Context, param: click.Parameter, value: io.TextIOWrapper
) -> None:
if value is None:
return
content = value.read().strip()
ctx.obj.system_message = content
def set_backend(ctx: click.Context, param: click.Parameter, value: str) -> None:
if value == "list":
formatter = ctx.make_formatter()
@@ -339,43 +321,7 @@ class Obj:
session_filename: Optional[pathlib.Path] = None
def maybe_add_txt_extension(fn: pathlib.Path) -> pathlib.Path:
if not fn.exists():
fn1 = pathlib.Path(str(fn) + ".txt")
if fn1.exists():
fn = fn1
return fn
def expand_splats(args: list[str]) -> list[str]:
result = []
saw_at_dot = False
for a in args:
if a == "@.":
saw_at_dot = True
continue
if saw_at_dot:
result.append(a)
continue
if a.startswith("@@"): ## double @ to escape an argument that starts with @
result.append(a[1:])
continue
if not a.startswith("@"):
result.append(a)
continue
if a.startswith("@:"):
fn: pathlib.Path = preset_path / a[2:]
else:
fn = pathlib.Path(a[1:])
fn = maybe_add_txt_extension(fn)
with open(fn, "r", encoding="utf-8") as f:
content = f.read()
parts = shlex.split(content)
result.extend(expand_splats(parts))
return result
class MyCLI(click.Group):
class MyCLI(click.MultiCommand):
def make_context(
self,
info_name: Optional[str],
@@ -404,117 +350,14 @@ class MyCLI(click.Group):
except ModuleNotFoundError as exc:
raise click.UsageError(f"Invalid subcommand {cmd_name!r}", ctx) from exc
def gather_preset_info(self) -> list[tuple[str, str]]:
result = []
for p in preset_path.glob("*"):
if p.is_file():
with p.open() as f:
first_line = f.readline()
if first_line.startswith("#"):
help_str = first_line[1:].strip()
else:
help_str = "(A comment on the first line would be shown here)"
result.append((f"@:{p.name}", help_str))
return result
def format_splat_options(
self, ctx: click.Context, formatter: click.HelpFormatter
) -> None:
with formatter.section("Splats"):
formatter.write_text(
"Before any other command-line argument parsing is performed, @FILE arguments are expanded:"
)
formatter.write_paragraph()
formatter.indent()
formatter.write_dl(
[
("@FILE", "Argument is searched relative to the current directory"),
(
"@:FILE",
"Argument is searched relative to the configuration directory (e.g., $HOME/.config/chap/preset)",
),
("@@…", "If an argument starts with a literal '@', double it"),
(
"@.",
"Stops processing any further `@FILE` arguments and leaves them unchanged.",
),
]
)
formatter.dedent()
formatter.write_paragraph()
formatter.write_text(
textwrap.dedent(
"""\
The contents of an `@FILE` are parsed by `shlex.split(comments=True)`.
Comments are supported. If the filename ends in .txt,
the extension may be omitted."""
)
)
formatter.write_paragraph()
if preset_info := self.gather_preset_info():
formatter.write_text("Presets found:")
formatter.write_paragraph()
formatter.indent()
formatter.write_dl(preset_info)
formatter.dedent()
def format_options(
self, ctx: click.Context, formatter: click.HelpFormatter
) -> None:
self.format_splat_options(ctx, formatter)
super().format_options(ctx, formatter)
api = ctx.obj.api or get_api(ctx)
if hasattr(api, "parameters"):
format_backend_help(api, formatter)
def main(
self,
args: Sequence[str] | None = None,
prog_name: str | None = None,
complete_var: str | None = None,
standalone_mode: bool = True,
windows_expand_args: bool = True,
**extra: Any,
) -> Any:
if args is None:
args = sys.argv[1:]
if os.name == "nt" and windows_expand_args:
args = click.utils._expand_args(args)
else:
args = list(args)
args = expand_splats(args)
return super().main(
args,
prog_name=prog_name,
complete_var=complete_var,
standalone_mode=standalone_mode,
windows_expand_args=windows_expand_args,
**extra,
)
class ConfigRelativeFile(click.File):
def __init__(self, mode: str, where: str) -> None:
super().__init__(mode)
self.where = where
def convert(
self,
value: str | os.PathLike[str] | IO[Any],
param: click.Parameter | None,
ctx: click.Context | None,
) -> Any:
if isinstance(value, str):
if value.startswith(":"):
value = configuration_path / self.where / value[1:]
else:
value = pathlib.Path(value)
if isinstance(value, pathlib.Path):
value = maybe_add_txt_extension(value)
return super().convert(value, param, ctx)
main = MyCLI(
help="Commandline interface to ChatGPT",
@@ -526,14 +369,6 @@ main = MyCLI(
help="Show the version and exit",
callback=version_callback,
),
click.Option(
("--system-message-file", "-s"),
type=ConfigRelativeFile("r", where="prompt"),
default=None,
callback=set_system_message_from_file,
expose_value=False,
help=f"Set the system message from a file. If the filename starts with `:` it is relative to the {configuration_path}/prompt. If the filename ends in .txt, the extension may be omitted.",
),
click.Option(
("--system-message", "-S"),
type=str,
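
Elsewhere in this file, `expand_splats` implements the `@FILE` rules described in the help text above. Reduced to the per-argument cases at a single level (hypothetical helper; the real function also recurses into expanded files, resolves `@:FILE` against the preset directory, honors the `@.` stop marker, and tries a `.txt` fallback):

```
import shlex

def expand_one(arg: str) -> list[str]:
    if arg.startswith("@@"):  # escaped literal @
        return [arg[1:]]
    if arg.startswith("@"):   # read the named file and split it shell-style
        with open(arg[1:], "r", encoding="utf-8") as f:
            return shlex.split(f.read(), comments=True)  # comments per the README
    return [arg]

print(expand_one("@@literal"))  # -> ['@literal']
```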


@@ -2,63 +2,25 @@
#
# SPDX-License-Identifier: MIT
import json
import subprocess
from typing import Protocol
import functools
import platformdirs
class APIKeyProtocol(Protocol):
@property
def api_key_name(self) -> str:
...
class HasKeyProtocol(Protocol):
@property
def parameters(self) -> APIKeyProtocol:
...
class UsesKeyMixin:
def get_key(self: HasKeyProtocol) -> str:
return get_key(self.parameters.api_key_name)
class NoKeyAvailable(Exception):
pass
_key_path_base = platformdirs.user_config_path("chap")
USE_PASSWORD_STORE = _key_path_base / "USE_PASSWORD_STORE"
if USE_PASSWORD_STORE.exists():
content = USE_PASSWORD_STORE.read_text(encoding="utf-8")
if content.strip():
cfg = json.loads(content)
pass_command: list[str] = cfg.get("PASS_COMMAND", ["pass", "show"])
pass_prefix: str = cfg.get("PASS_PREFIX", "chap/")
@functools.cache
def get_key(name: str, what: str = "openai api key") -> str:
key_path = _key_path_base / name
if not key_path.exists():
raise NoKeyAvailable(
f"Place your {what} in {key_path} and run the program again"
)
@functools.cache
def get_key(name: str, what: str = "api key") -> str:
if name == "-":
return "-"
key_path = f"{pass_prefix}{name}"
command = pass_command + [key_path]
return subprocess.check_output(command, encoding="utf-8").split("\n")[0]
else:
@functools.cache
def get_key(name: str, what: str = "api key") -> str:
key_path = _key_path_base / name
if not key_path.exists():
raise NoKeyAvailable(
f"Place your {what} in {key_path} and run the program again"
)
with open(key_path, encoding="utf-8") as f:
return f.read().strip()
with open(key_path, encoding="utf-8") as f:
return f.read().strip()
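
One side of this diff routes key lookup through the `pass` password manager whenever a `USE_PASSWORD_STORE` file exists in the configuration directory. Hypothetical contents for that file, mirroring the `cfg.get()` defaults in the code above:

```
import json

cfg = {"PASS_COMMAND": ["pass", "show"], "PASS_PREFIX": "chap/"}
print(json.dumps(cfg))
# With this JSON in $CONFIG/chap/USE_PASSWORD_STORE, get_key("openai_api_key")
# runs: pass show chap/openai_api_key
```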