AutoComplete

Hack Summary

Name: AutoComplete
Category: chat
In-game description: “Auto-completes your chat messages using large language models. Requires an OpenAI account with API access or any other language model API that is OpenAI-compatible.”
Default keybind: none
Source code: https://github.com/Wurst-Imperium/Wurst7/blob/master/src/main/java/net/wurstclient/hacks/AutoCompleteHack.java, https://github.com/Wurst-Imperium/Wurst7/blob/master/src/main/java/net/wurstclient/hacks/autocomplete

AutoComplete is a Minecraft hack that generates auto-completions for the user's chat messages, using large language models like GPT-3, GPT-4 and LLaMA.
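
As a concrete illustration of what such an API call involves, here is a minimal standalone Java sketch of an OpenAI-compatible chat completion request. It is not code from the Wurst source; the system prompt and draft message are invented for the example:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ChatCompletionSketch {
        public static void main(String[] args) throws Exception {
            // Read the API key from the environment rather than hard-coding it.
            String apiKey = System.getenv("OPENAI_API_KEY");

            // A minimal chat completion request body. The model name matches
            // this page's default; the messages are invented for the example.
            String body = """
                {
                  "model": "gpt-4o-2024-08-06",
                  "max_tokens": 16,
                  "messages": [
                    {"role": "system",
                     "content": "Complete the user's Minecraft chat message."},
                    {"role": "user", "content": "hey, does anyone know where"}
                  ]
                }""";

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + apiKey)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

            // Print the raw JSON response; the completion text is in
            // choices[0].message.content.
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }

Any server that accepts this request shape can be used in place of the official endpoint, which is what the endpoint settings below are for.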

Settings

OpenAI model

Type: Enum
In-game description: “The model to use for OpenAI API calls.

GPT-4o-2024-08-06 is one of the smartest models at the time of writing and will often produce the best completions. However, it's meant to be an assistant rather than an auto-completion system, so you will see it produce some odd completions at times.

GPT-3.5-Turbo-Instruct is an older, non-chat model based on GPT-3.5 that works well for auto-completion tasks.”
Default value: gpt-4o-2024-08-06
Possible values: gpt-4o-2024-08-06, gpt-4o-2024-05-13, gpt-4o-mini-2024-07-18, gpt-4-turbo-2024-04-09, gpt-4-0125-preview, gpt-4-1106-preview, gpt-4-0613, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, gpt-3.5-turbo-instruct, davinci-002, babbage-002

Max tokens

Type: Slider
In-game description: “The maximum number of tokens that the model can generate.

Higher values allow the model to predict longer chat messages, but also increase the time it takes to generate predictions.

The default value of 16 is fine for most use cases.”
Default value: 16 tokens
Minimum: 1 token
Maximum: 100 tokens
Increment: 1 token

Temperature

Type: Slider
In-game description: “Controls the model's creativity and randomness. A higher value will result in more creative and sometimes nonsensical completions, while a lower value will result in more boring completions.”
Default value: 1
Minimum: 0
Maximum: 2
Increment: 0.01

Note: Temperature values above 1 will cause most language models to generate complete nonsense and should only be used for comedic effect.

Top P

Type: Slider
In-game description: “An alternative to temperature. Makes the model less random by only letting it choose from the most likely tokens.

A value of 100% disables this feature by letting the model choose from all tokens.”
Default value: 100%
Minimum: 0%
Maximum: 100%
Increment: 1%

Presence penalty

Type: Slider
In-game description: “Penalty for choosing tokens that already appear in the chat history.

Positive values encourage the model to use synonyms and talk about different topics. Negative values encourage the model to repeat the same word over and over again.”
Default value: 0
Minimum: -2
Maximum: 2
Increment: 0.01

Frequency penalty

Type: Slider
In-game description: “Similar to presence penalty, but based on how often the token appears in the chat history.

Positive values encourage the model to use synonyms and talk about different topics. Negative values encourage the model to repeat existing chat messages.”
Default value: 0
Minimum: -2
Maximum: 2
Increment: 0.01
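
Taken together, “Max tokens” and the four sliders above correspond one-to-one to fields of an OpenAI-style request body. A minimal sketch of that mapping, using OpenAI's documented parameter names (the record and method names are illustrative, not from the Wurst source):

    import java.util.Locale;

    // Sketch: mapping the sampling settings onto OpenAI request fields. The
    // field names are OpenAI's documented parameters; the defaults in main()
    // mirror this page. The "Top P" slider's 100% corresponds to top_p = 1.0.
    record SamplingSettings(int maxTokens, double temperature, double topP,
                            double presencePenalty, double frequencyPenalty) {

        String toJsonFields() {
            return String.format(Locale.ROOT,
                "\"max_tokens\": %d, \"temperature\": %.2f, \"top_p\": %.2f, "
                    + "\"presence_penalty\": %.2f, \"frequency_penalty\": %.2f",
                maxTokens, temperature, topP, presencePenalty, frequencyPenalty);
        }

        public static void main(String[] args) {
            // Defaults from this page: 16 tokens, temperature 1, top P 100%,
            // both penalties 0.
            System.out.println(
                new SamplingSettings(16, 1.0, 1.0, 0.0, 0.0).toJsonFields());
        }
    }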

Stop sequence

Type: Enum
In-game description: “Controls how AutoComplete detects the end of a chat message.

Line Break is the default value and is recommended for most language models.

Next Message works better with certain code-optimized language models, which have a tendency to insert line breaks in the middle of a chat message.”
Default value: Line Break
Possible values: Line Break, Next Message

Note: “certain code-optimized language models” is a reference to OpenAI's code-davinci-002 model, which worked much better when using the “Next Message” option and is unfortunately no longer available. It's possible that open source code models like StarCoder will see a similar improvement when using the “Next Message” option.
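
As a hedged sketch of what the two options plausibly translate to: vanilla Minecraft renders player chat as “<PlayerName> message”, so a stop string of “\n<” would cut generation off as soon as the model starts writing the next player's message. The exact stop strings below are an assumption, not copied from the Wurst source:

    // Sketch: plausible stop strings for the two enum values. These are an
    // assumption based on vanilla chat formatting, not the actual Wurst code.
    enum StopSequence {
        LINE_BREAK("\n"),     // stop at the end of the current line
        NEXT_MESSAGE("\n<");  // stop when a new "<Player>" message begins

        final String stopString;

        StopSequence(String stopString) {
            this.stopString = stopString;
        }
    }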

Context length

Type: Slider
In-game description: “Controls how many messages from the chat history are used to generate predictions.

Higher values improve the quality of predictions, but also increase the time it takes to generate them, as well as cost (for APIs like OpenAI) or RAM usage (for self-hosted models).”
Default value: 10 messages
Minimum: 0 (unlimited)
Maximum: 100 messages
Increment: 1 message

Filter server messages

Type: Checkbox
In-game description: “Only shows player-made chat messages to the model.

This can help you save tokens and get more out of a low context length, but it also means that the model will have no idea about events like players joining, leaving, dying, etc.”
Default value: not checked
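
The interplay of “Context length” and “Filter server messages” can be pictured as a small pipeline over the chat history. A minimal sketch, assuming player messages follow the vanilla “<Name> …” format (the pattern and class are illustrative, not Wurst's actual filter):

    import java.util.List;
    import java.util.regex.Pattern;

    // Sketch: trim the chat history to the last N messages and optionally drop
    // anything that doesn't look like a player message. The "<Name> ..." check
    // is an assumption based on vanilla chat formatting.
    class ContextBuilder {
        private static final Pattern PLAYER_MESSAGE =
            Pattern.compile("^<[^>]+> .*");

        static String build(List<String> history, int contextLength,
                            boolean filterServerMessages) {
            List<String> messages = history;
            if (filterServerMessages)
                messages = messages.stream()
                    .filter(m -> PLAYER_MESSAGE.matcher(m).matches())
                    .toList();
            // A context length of 0 means unlimited, per the setting above.
            if (contextLength > 0 && messages.size() > contextLength)
                messages = messages.subList(
                    messages.size() - contextLength, messages.size());
            return String.join("\n", messages);
        }
    }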

Custom model

Type: TextField
In-game description: “If set, this model will be used instead of the one specified in the “OpenAI model” setting.

Use this if you have a fine-tuned OpenAI model or if you are using a custom endpoint that is OpenAI-compatible but offers different models.”
Default value: (empty)

Custom model type

Type: Enum
In-game description: “Whether the custom model should use the chat endpoint or the legacy endpoint.

If “Custom model” is left blank, this setting is ignored.”
Default value: Chat
Possible values: Chat, Legacy
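
A minimal sketch of how these two settings plausibly resolve to a model name and an endpoint. The method names are illustrative, and the assumption that a blank “Custom model” lets the selected OpenAI model determine the endpoint (chat models like gpt-4o versus legacy completion models like gpt-3.5-turbo-instruct) is inferred from the descriptions above, not confirmed from the source:

    // Sketch: resolving the effective model and endpoint. Names are
    // illustrative; the blank-custom-model behavior is an assumption.
    class ModelSelection {
        static String pickModel(String openAiModel, String customModel) {
            return customModel.isBlank() ? openAiModel : customModel;
        }

        static String pickEndpoint(String customModel,
                                   boolean customModelIsChat,
                                   boolean openAiModelIsChat,
                                   String chatEndpoint, String legacyEndpoint) {
            // "Custom model type" is ignored when no custom model is set.
            boolean useChat = customModel.isBlank()
                ? openAiModelIsChat : customModelIsChat;
            return useChat ? chatEndpoint : legacyEndpoint;
        }
    }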

OpenAI chat endpoint

Type: TextField
In-game description: “Endpoint for OpenAI's chat completion API.”
Default value: https://api.openai.com/v1/chat/completions

The “OpenAI chat endpoint” setting allows the user to use OpenAI's chat completion API through a proxy. This is necessary in some countries where OpenAI's APIs are banned.

It may also be useful for Microsoft Azure customers who have their own endpoint, but this has not been tested yet. There are subtle differences in the Azure version of the API, so it's possible that it won't work with AutoComplete.

OpenAI legacy endpoint

Type: TextField
In-game description: “Endpoint for OpenAI's legacy completion API.”
Default value: https://api.openai.com/v1/completions

Max suggestions per draft

Type: Slider
In-game description: “How many suggestions the AI is allowed to generate for the same draft message.”
Default value: 3
Minimum: 1
Maximum: 10
Increment: 1

The “Max suggestions per draft” setting controls how many different suggestions the AI will try to generate for the same draft message. Higher values will result in more suggestions, but will also use up more tokens and be more expensive for OpenAI API users. This setting can be useful for exploring different response options.

Setting “Max suggestions per draft” to a higher value than “Max suggestions shown” is usually not a good idea, as there will be no way to see the additional suggestions.
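
OpenAI's APIs expose this through the n request field, which asks for several completions in a single call; the changelog note below that all suggestions for a draft are now generated at once suggests this is how the setting is implemented, though that mapping is an assumption. A sketch of such a request body:

    class MultiSuggestionRequest {
        // Sketch: OpenAI's "n" field requests several completions in one call.
        // Whether Wurst maps "Max suggestions per draft" onto "n" internally
        // is an assumption; the draft message is invented for the example.
        static final String BODY = """
            {
              "model": "gpt-4o-2024-08-06",
              "max_tokens": 16,
              "n": 3,
              "messages": [
                {"role": "user", "content": "anyone want to trade"}
              ]
            }""";
    }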

Max suggestions kept

Type: Slider
In-game description: “Maximum number of suggestions kept in memory.”
Default value: 100 messages
Minimum: 10 messages
Maximum: 1000 messages
Increment: 10 messages

The “Max suggestions kept” setting only controls at what point old suggestions are deleted from memory. Higher values don't use any additional tokens and only consume a tiny amount of RAM. This is why the range of values is so much higher than for the other settings.
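
In other words, the setting simply bounds an in-memory buffer. A minimal sketch of such a store (the data structure and eviction policy are illustrative, not taken from the Wurst source):

    import java.util.ArrayDeque;

    // Sketch: a size-bounded suggestion store. The oldest suggestions fall off
    // the front once the cap is reached; no API tokens are involved, so the
    // cap can be much larger than the other suggestion settings.
    class SuggestionStore {
        private final ArrayDeque<String> suggestions = new ArrayDeque<>();
        private final int maxKept;

        SuggestionStore(int maxKept) {
            this.maxKept = maxKept;
        }

        void add(String suggestion) {
            suggestions.addLast(suggestion);
            while (suggestions.size() > maxKept)
                suggestions.removeFirst(); // evict the oldest suggestion
        }
    }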

Max suggestions shown

Type: Slider
In-game description: “How many suggestions can be shown above the chat box.

If this is set too high, the suggestions will obscure some of the existing chat messages. How high you can set this depends on your screen resolution and GUI scale.”
Default value: 5
Minimum: 1
Maximum: 10
Increment: 1

The “Max suggestions shown” setting controls how many suggestions can be shown at once on the screen. Depending on the user's screen resolution and GUI scale, higher values may cause the suggestions to cover up other parts of the UI.


Changes

    • Added AutoComplete.
    • Fixed the description of AutoComplete's "Max tokens" setting.
    • Added "OpenAI chat endpoint", "OpenAI legacy endpoint" and "Oobabooga endpoint" settings to AutoComplete.
    • AutoComplete now supports all of OpenAI's currently available language models.
    • Added support for gpt-3.5-turbo-1106 and gpt-4-1106-preview models to AutoComplete.
    • Updated the list of OpenAI models in AutoComplete.
    • Added "Custom model" and "Custom model type" settings to AutoComplete.
    • AutoComplete's "Max suggestions per draft" setting now generates all the suggestions at once.
    • AutoComplete's "Temperature" setting now defaults to 1, and "Frequency penalty" defaults to 0.
    • Removed the "API provider", "Repetition penalty", and "Encoder repetition penalty" settings from AutoComplete.
    • Updated AutoComplete's list of OpenAI models.