
Model-Generated HTML Output Not Showing Up in the User Interface (UI) #4240


Open · 3 tasks done
Ahwar opened this issue Feb 19, 2025 · 3 comments · May be fixed by #5560
Labels: area:chat, good-first-issue, ide:vscode, kind:bug, os:windows, priority:medium

Ahwar commented Feb 19, 2025

Before submitting your bug report

Relevant environment info

- OS: Windows 11 Pro for Workstations, 24H2, OS build is 26100.3194
- Continue version: 0.8.68
- IDE version: VSCode 1.96.2 (modified by cursor AI, version 0.45.11)
- Model: llama3.1
- config.yaml:
  models:
    - model: claude-3-5-sonnet-latest
      provider: anthropic
      api_key: ""
      title: Claude 3.5 Sonnet
    - model: deepseek-r1:8b
      title: deepseek-r1:8b
      provider: ollama
    - model: deepseek-r1:1.5b
      title: deepseek-r1:1.5b
      provider: ollama
    - model: llama3.1
      title: llama3.1
      provider: ollama
    - model: qwen2.5-coder
      title: qwen2.5-coder
      provider: ollama
    - model: qwen2.5-coder:3b
      title: qwen2.5-coder:3b
      provider: ollama
  tabAutocompleteModel:
    title: qwen2.5-coder:3b
    provider: ollama
    model: qwen2.5-coder:3b
  contextProviders:
    - name: code
      params: {}
    - name: docs
      params: {}
    - name: diff
      params: {}
    - name: terminal
      params: {}
    - name: problems
      params: {}
    - name: folder
      params: {}
    - name: codebase
      params: {}
  slashCommands:
    - name: share
      description: Export the current chat session to markdown
    - name: cmd
      description: Generate a shell command, explain after showing the command, not before.
    - name: commit
      description: Generate a git commit message

Description

Issue with Displaying HTML Code

When I asked the chatbot to return its response as HTML code, the response did not render in the chat interface. However, when I exported the session with the /share command or copied the response, the HTML code was present in the text; it simply was not displayed in the UI.

Model response exported via the /share command:

20250219T145432_session.md

Analysis of the Issue

I suspect that the UI is attempting to render the HTML, but that this rendering has somehow been disabled or is failing. This points to a configuration issue or a bug that prevents HTML in model responses from being displayed correctly.
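To illustrate what I suspect is happening, here is a small sketch (my own, using marked only as a representative markdown renderer; Continue's actual renderer may be a different library) of how fenced versus bare HTML is typically treated:

```typescript
import { marked } from "marked";

// Bare block-level HTML passes through as real markup: the <table>
// element would be injected into the page DOM instead of shown as text.
console.log(marked.parse("<table><tr><td>BoolQ</td></tr></table>"));
// -> <table><tr><td>BoolQ</td></tr></table>

// Fenced HTML is escaped and rendered as a visible code block.
console.log(marked.parse("```html\n<table><tr><td>BoolQ</td></tr></table>\n```"));
// -> <pre><code class="language-html">&lt;table&gt;...&lt;/table&gt;</code></pre>
```

If the chat webview behaves like the first case, a bare `<table>` from the model would disappear into the page rather than being displayed, which matches what I see.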

Attempted Solutions

I tried restarting the IDE, but this did not resolve the problem.

Additional Information

I have attached the chat session history in the _session.md file above, which contains more details about this issue.

Here are my IDE details:

VSCode Version: 1.96.2
Commit: f5f18731406b73244e0558ee7716d77c8096d150
Date: 2025-02-07T09:43:58.555Z
Electron: 32.2.6
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.26100

And here is the operating-system information:

Edition: Windows 11 Pro for Workstations
Version: 24H2
Installed on: 04/01/2025
OS build: 26100.3194
Experience: Windows Feature Experience Pack 1000.26100.48.0

Please let me know if you have any questions.

To reproduce

Reproduce the issue using the Llama 3.1 model via Ollama:

  1. Ask the chatbot to display a table in Markdown format.
  2. Request it to render the same table in HTML format.
  3. Note that the HTML table does not appear in the chat UI.

Screenshot: (image attached)

Log output

==========================================================================
==========================================================================
##### Completion options #####
{
  "contextLength": 8096,
  "model": "llama3.1",
  "maxTokens": 4096,
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>"
  ]
}

##### Request options #####
{}

##### Prompt #####
<user>
convert to markdown table:
SuperGLUE Tasks / Benchmarks:
Short NameFull NameDescriptionBoolQBoolean QuestionsInvolves answering a yes/no question based on a short passage.CBCommitmentBankTests understanding of entailment and contradiction in a three-sentence format.COPAChoice of Plausible AlternativesMeasures causal reasoning by asking for the cause/effect of a given sentence.MultiRCMulti-Sentence Reading ComprehensionInvolves answering questions about a paragraph where each question may have multiple correct answers.ReCoRDReading Comprehension with Commonsense ReasoningRequires selecting the correct named entity from a passage to fill in the blank of a question.RTERecognizing Textual EntailmentInvolves identifying whether a sentence entails, contradicts, or is neutral towards another sentence.WiCWords in ContextTests understanding of word sense disambiguation in different contexts.WSCWinograd Schema ChallengeFocuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning.AX-bBroad Coverage DiagnosticA diagnostic set to evaluate model performance on a broad range of linguistic phenomena.AX-gWinogender Schema DiagnosticsTests for the presence of gender bias in automated coreference resolution systems.

<assistant>
Here is the text converted to a markdown table:

| **Short Name** | **Full Name** | **Description** |
| --- | --- | --- |
| BoolQ | Boolean Questions | Involves answering a yes/no question based on a short passage. |
| CB | CommitmentBank | Tests understanding of entailment and contradiction in a three-sentence format. |
| COPA | Choice of Plausible Alternatives | Measures causal reasoning by asking for the cause/effect of a given sentence. |
| MultiRC | Multi-Sentence Reading Comprehension | Involves answering questions about a paragraph where each question may have multiple correct answers. |
| ReCoRD | Reading Comprehension with Commonsense Reasoning | Requires selecting the correct named entity from a passage to fill in the blank of a question. |
| RTE | Recognizing Textual Entailment | Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence. |
| WiC | Words in Context | Tests understanding of word sense disambiguation in different contexts. |
| WSC | Winograd Schema Challenge | Focuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning. |
| AX-b | Broad Coverage Diagnostic | A diagnostic set to evaluate model performance on a broad range of linguistic phenomena. |
| AX-g | Winogender Schema Diagnostics | Tests for the presence of gender bias in automated coreference resolution systems. |

<user>
convert the table in html

==========================================================================
==========================================================================
##### Completion options #####
{
  "contextLength": 8096,
  "model": "llama3.1",
  "maxTokens": 12,
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>"
  ]
}

##### Request options #####
{}

##### Prompt #####
<user>
Given the following... please reply with a title for the chat that is 3-4 words in length, all words used should be directly related to the content of the chat, avoid using verbs unless they are directly related to the content of the chat, no additional text or explanation, you don't need ending punctuation.

Here is the text converted to a markdown table:

| **Short Name** | **Full Name** | **Description** |
| --- | --- | --- |
| BoolQ | Boolean Questions | Involves answering a yes/no question based on a short passage. |
| CB | CommitmentBank | Tests understanding of entailment and contradiction in a three-sentence format. |
| COPA | Choice of Plausible Alternatives | Measures causal reasoning by asking for the cause/effect of a given sentence. |
| MultiRC | Multi-Sentence Reading Comprehension | Involves answering questions about a paragraph where each question may have multiple correct answers. |
| ReCoRD | Reading Comprehension with Commonsense Reasoning | Requires selecting the correct named entity from a passage to fill in the blank of a question. |
| RTE | Recognizing Textual Entailment | Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence. |
| WiC | Words in Context | Tests understanding of word sense disambiguation in different contexts. |
| WSC | Winograd Schema Challenge | Focuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning. |
| AX-b | Broad Coverage Diagnostic | A diagnostic set to evaluate model performance on a broad range of linguistic phenomena. |
| AX-g | Winogender Schema Diagnostics | Tests for the presence of gender bias in automated coreference resolution systems. |

==========================================================================
==========================================================================
Completion:
Natural Language Tasks

==========================================================================
==========================================================================
##### Completion options #####
{
  "contextLength": 8096,
  "model": "llama3.1",
  "maxTokens": 4096,
  "stop": [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>"
  ]
}

##### Request options #####
{}

##### Prompt #####
<user>
convert to markdown table:
SuperGLUE Tasks / Benchmarks:
Short NameFull NameDescriptionBoolQBoolean QuestionsInvolves answering a yes/no question based on a short passage.CBCommitmentBankTests understanding of entailment and contradiction in a three-sentence format.COPAChoice of Plausible AlternativesMeasures causal reasoning by asking for the cause/effect of a given sentence.MultiRCMulti-Sentence Reading ComprehensionInvolves answering questions about a paragraph where each question may have multiple correct answers.ReCoRDReading Comprehension with Commonsense ReasoningRequires selecting the correct named entity from a passage to fill in the blank of a question.RTERecognizing Textual EntailmentInvolves identifying whether a sentence entails, contradicts, or is neutral towards another sentence.WiCWords in ContextTests understanding of word sense disambiguation in different contexts.WSCWinograd Schema ChallengeFocuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning.AX-bBroad Coverage DiagnosticA diagnostic set to evaluate model performance on a broad range of linguistic phenomena.AX-gWinogender Schema DiagnosticsTests for the presence of gender bias in automated coreference resolution systems.

<assistant>
Here is the text converted to a markdown table:

| **Short Name** | **Full Name** | **Description** |
| --- | --- | --- |
| BoolQ | Boolean Questions | Involves answering a yes/no question based on a short passage. |
| CB | CommitmentBank | Tests understanding of entailment and contradiction in a three-sentence format. |
| COPA | Choice of Plausible Alternatives | Measures causal reasoning by asking for the cause/effect of a given sentence. |
| MultiRC | Multi-Sentence Reading Comprehension | Involves answering questions about a paragraph where each question may have multiple correct answers. |
| ReCoRD | Reading Comprehension with Commonsense Reasoning | Requires selecting the correct named entity from a passage to fill in the blank of a question. |
| RTE | Recognizing Textual Entailment | Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence. |
| WiC | Words in Context | Tests understanding of word sense disambiguation in different contexts. |
| WSC | Winograd Schema Challenge | Focuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning. |
| AX-b | Broad Coverage Diagnostic | A diagnostic set to evaluate model performance on a broad range of linguistic phenomena. |
| AX-g | Winogender Schema Diagnostics | Tests for the presence of gender bias in automated coreference resolution systems. |

<user>
convert the table in html

==========================================================================
==========================================================================
Completion:
Here is the markdown table converted to HTML:

<table>
  <tr>
    <th>Short Name</th>
    <th>Full Name</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>BoolQ</td>
    <td>Boolean Questions</td>
    <td>Involves answering a yes/no question based on a short passage.</td>
  </tr>
  <tr>
    <td>CB</td>
    <td>CommitmentBank</td>
    <td>Tests understanding of entailment and contradiction in a three-sentence format.</td>
  </tr>
  <tr>
    <td>COPA</td>
    <td>Choice of Plausible Alternatives</td>
    <td>Measures causal reasoning by asking for the cause/effect of a given sentence.</td>
  </tr>
  <tr>
    <td>MultiRC</td>
    <td>Multi-Sentence Reading Comprehension</td>
    <td>Involves answering questions about a paragraph where each question may have multiple correct answers.</td>
  </tr>
  <tr>
    <td>ReCoRD</td>
    <td>Reading Comprehension with Commonsense Reasoning</td>
    <td>Requires selecting the correct named entity from a passage to fill in the blank of a question.</td>
  </tr>
  <tr>
    <td>RTE</td>
    <td>Recognizing Textual Entailment</td>
    <td>Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence.</td>
  </tr>
  <tr>
    <td>WiC</td>
    <td>Words in Context</td>
    <td>Tests understanding of word sense disambiguation in different contexts.</td>
  </tr>
  <tr>
    <td>WSC</td>
    <td>Winograd Schema Challenge</td>
    <td>Focusing on resolving coreference resolution within a sentence, often requiring commonsense reasoning.</td>
  </tr>
  <tr>
    <td>AX-b</td>
    <td>Broad Coverage Diagnostic</td>
    <td>A diagnostic set to evaluate model performance on a broad range of linguistic phenomena.</td>
  </tr>
  <tr>
    <td>AX-g</td>
    <td>Winogender Schema Diagnostics</td>
    <td>Tests for the presence of gender bias in automated coreference resolution systems.</td>
  </tr>
</table>

Let me know if you'd like any further modifications.
tomasz-stefaniak (Collaborator) commented:

@Ahwar thanks for reporting this; I'm tagging this ticket as good-first-issue.

aadarshkt (Contributor) commented:

@Ahwar,

My Continue extension doesn't show this error. The HTML output is generated correctly.

(screenshot attached)

Maybe try using a different model?


Ahwar commented Mar 15, 2025

> @Ahwar,
>
> My Continue extension doesn't show this error. The HTML output is generated correctly.
>
> (screenshot attached) Maybe try using a different model?

The HTML code in your model's response is inside a code block. If you look at my model's raw output under the "Log output" heading of this issue, the HTML my model generated was not enclosed in a code block. Consequently, the extension interprets the bare tags as actual HTML markup rather than text. Since Visual Studio Code's UI is itself built with HTML, this interferes with the page's rendering, and the output is not displayed correctly, or at all.
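As a rough sketch of one possible fix (mine, not Continue's actual code; the function name and regex are hypothetical), the extension could escape raw HTML that sits outside fenced code blocks before passing the message to its markdown renderer:

```typescript
// Hypothetical preprocessing step, not Continue's actual implementation:
// escape < and > in model output except inside fenced code blocks, so a
// bare <table> is displayed as text instead of being injected into the DOM.
function escapeBareHtml(markdown: string): string {
  // The capturing group keeps fenced blocks as the odd-indexed parts.
  const parts = markdown.split(/(```[\s\S]*?```)/g);
  return parts
    .map((part, i) =>
      i % 2 === 1
        ? part // inside a fence: leave untouched
        : part.replace(/</g, "&lt;").replace(/>/g, "&gt;")
    )
    .join("");
}

// Example: the bare table from my log stays visible as text.
console.log(
  escapeBareHtml("Here is the table:\n<table><tr><td>BoolQ</td></tr></table>")
);
```

That way, output like mine would degrade gracefully no matter which model produced it.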

On another note, though it may not be relevant: you're using Gemini 2.0 Flash, an API model known for generating accurate output. In contrast, I'm using a local model through Ollama, as specified in the post.

Edit: Changing the model can work around the issue, but any user can still hit it whenever a model emits unfenced HTML.
