
Buffer Over-read Affecting Fragmentation Feature #54

Closed · bathooman opened this issue Mar 19, 2021 · 13 comments

@bathooman commented Mar 19, 2021
Hello,

I tried to submit this through Eclipse Bugzilla; however, the dedicated page for TinyDTLS has apparently been removed, or I did not have access to it. The following over-read was found using symbolic execution. Suppose you have a Client Key Exchange message with the following values for the mentioned fields:

  • length in the record layer = 25
  • length in the handshake layer = 51
  • fragment length = 16
  • fragment offset = 0

When the message is handled (in dtls_handle_message()), data_length = decrypt_verify(peer, msg, rlen, &data) is called. It returns a pointer to the start of the handshake layer (uint8 *data) and the real size of the handshake layer (data_length = 25). Later, since packet_length = 51 > fragment_length = 16, the reassembly process starts. The message also passes the fragment_offset = 0 + fragment_length = 16 <= packet_length = 51 check. Finally, after memory is allocated on the heap (peer->handshake_params->reassemble_buf->data), the memcpy that copies the message into the newly allocated buffer derives its size from the fragment offset and fragment length, and uses data, which points to the start of the handshake layer, as its source. Recall that the valid range for data is only 25 bytes (= data_length), while the computed size (fragment_length = 16 + handshake_header_size = 12) is 28 bytes. Therefore, a 3-byte over-read occurs.
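To make the arithmetic concrete, here is a minimal C sketch of the missing bound (copy_fragment, reassemble_buf, and the standalone length parameters are hypothetical names for illustration; this is not the actual TinyDTLS code):

#include <stddef.h>
#include <string.h>

/* Sketch only: the reassembly copy sizes itself from header fields, so
 * with data_length = 25 and fragment_length = 16 it would read
 * 16 + 12 = 28 bytes from a 25-byte buffer. */
static int copy_fragment(unsigned char *reassemble_buf,
                         const unsigned char *data, size_t data_length,
                         size_t fragment_length) {
  const size_t handshake_header_size = 12;
  size_t copy_len = fragment_length + handshake_header_size;  /* 28 */

  /* The check missing from the vulnerable path: bound the copy by the
   * number of bytes decrypt_verify() actually returned. */
  if (copy_len > data_length)                                 /* 28 > 25 */
    return -1;                      /* reject instead of over-reading */

  memcpy(reassemble_buf, data, copy_len);
  return 0;
}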
I have attached the means to reproduce the bug. After downloading the suite, execute the script setupserver.sh in tinydtls-witness/tests to compile TinyDTLS. Note that I have de-randomized TinyDTLS for this demonstration; the patch is available in tinydtls-witness/tests/patches. Successful execution of setupserver.sh will run dtls-server on port 20220. Executing ./reproduce.sh in tinydtls-witness/tests should then cause dtls-server to crash.
[Screenshot from 2021-03-19 18-44-03: dtls-server crash output]

The malformed Client Key Exchange can be found in tinydtls-witness/tests/testcases.

I hope this explanation is clear.
tinydtls-witness.zip

Best wishes,
Hooman Asadian

@obgm (Contributor) commented Mar 19, 2021

Thanks! Opening the issue here on GitHub is exactly the correct way to proceed.

@mrdeep1 (Contributor) commented Mar 24, 2021

@bathooman There is a fix to the feature/handshake_fragmentation branch at #62 for testing out.

@bathooman (Author)

Hello,

Unfortunately, the pull request does not address the mentioned bug. I reviewed the new check, and it appears to check for fragmented Client Hello messages. However, the bug described above can occur in any fragmented message; I explained the case where it happens for a fragmented Client Key Exchange. It is worth mentioning that the pull request does address another issue with fragmentation in TinyDTLS and correctly rejects fragmented Client Hello messages.

Regards,
Hooman

@mrdeep1 (Contributor) commented Mar 25, 2021

I was not able to reproduce the originally reported issue. This was complicated by multiple issues being raised with identically named zip files containing different files to reproduce each issue. Furthermore, the contents of the example patch files (to de-randomize some items) did not match the contents of the source files (even after ignoring the helpful additions of assert()).

In this case, peer was not set, so I am not sure how things like peer->handshake_params->reassemble_buf->data ever worked (I certainly got a segmentation violation at this point). Hence the fix of rejecting a fragmented ClientHello if the peer is unknown.
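As an illustration of the shape of that guard (simplified stand-in types and a hypothetical helper name, not the actual #62 diff), something along these lines prevents the NULL dereference:

/* Sketch only: before any access to peer->handshake_params, reject
 * fragmented handshake messages when no peer state exists yet, since
 * peer can be NULL at this point (see the gdb trace below). */
typedef struct dtls_handshake_parameters_t dtls_handshake_parameters_t;
typedef struct { dtls_handshake_parameters_t *handshake_params; } dtls_peer_t;

static int can_reassemble(const dtls_peer_t *peer) {
  if (!peer || !peer->handshake_params)
    return 0;   /* unknown peer: reject the fragment with a fatal alert */
  return 1;
}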

With the #62 code, are you still able to reproduce a similar issue for any other handshake message?

@mrdeep1 (Contributor) commented Mar 25, 2021

With your code (untouched other than adding the -ggdb3 CFLAGS option), I get:

....
Mar 25 13:58:01 DEBG got 38 bytes from port 40762
Mar 25 13:58:01 DEBG dtls_handle_message: PEER NOT FOUND
Mar 25 13:58:01 DEBG peer addr: [::1]:40762
Mar 25 13:58:01 DEBG got packet 22 (38 bytes)
Mar 25 13:58:01 DEBG receive header: (13 bytes):
00000000 16 FE FD 00 00 00 00 00  00 00 02 00 19 
Mar 25 13:58:01 DEBG receive unencrypted: (25 bytes):
00000000 10 00 00 33 00 02 00 00  00 00 00 10 00 0F 43 6C 
00000010 69 65 6E 74 5F 69 64 65  6E 
Mar 25 13:58:01 DEBG received fragmented handshake packet: length 51, fragment length 16.
Mar 25 13:58:01 DEBG received first packet of fragmented.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000409c7c in handle_handshake (ctx=0x61b010, peer=0x0, 
    session=0x7fffffffe7b0, role=DTLS_SERVER, 
    state=DTLS_STATE_WAIT_CLIENTHELLO, data=0x61a5ad "\020", data_length=25)
    at dtls.c:3663
3663          dtls_free_reassemble_buffer(peer->handshake_params);
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.209.el6_9.2.x86_64
(gdb) where
#0  0x0000000000409c7c in handle_handshake (ctx=0x61b010, peer=0x0, 
    session=0x7fffffffe7b0, role=DTLS_SERVER, 
    state=DTLS_STATE_WAIT_CLIENTHELLO, data=0x61a5ad "\020", data_length=25)
    at dtls.c:3663
#1  0x000000000040b0e0 in dtls_handle_message (ctx=0x61b010, 
    session=0x7fffffffe7b0, 
    msg=0x61a5a0 "\026\376", <incomplete sequence \375>, msglen=38)
    at dtls.c:4121
#2  0x0000000000401925 in dtls_handle_read (ctx=0x61b010) at dtls-server.c:177
#3  0x0000000000402014 in main (argc=2, argv=0x7fffffffeae8)
    at dtls-server.c:355
(gdb) p peer
$1 = (dtls_peer_t *) 0x0
(gdb)

Hence my fix.

@mrdeep1 (Contributor) commented Mar 25, 2021

@bathooman #62 has been updated, which should fix your initially reported issue.

@bathooman (Author)

Dear @mrdeep1,

The reproduce.sh script was not using netcat in the initial witness, so after your fix you could not reproduce the bug I am describing. I have attached a new witness so you can reproduce it:
tinydtls-fixes_54.zip

@mrdeep1 (Contributor) commented Mar 25, 2021

Yes, #62 fixes this issue from my perspective when testing with the latest provided zip file.

@bathooman (Author)

Thank you for the PR. Unfortunately, however, the PR only partially solves the issue. Suppose you have a fragmented Client Key Exchange message consisting of two fragments, f1 and f2. The first fragment (f1) looks like the following:

[image: contents of fragment f1]

Everything is fine with f1. Now suppose we have a malformed second fragment (f2) that looks like the following:

[image: contents of malformed fragment f2]

In f2, the length field in the Handshake Protocol has been maliciously changed to 129, and the Fragment Length has been maliciously changed to 8, while we only have 4 bytes in this fragment. When TinyDTLS processes f2, this check passes (13 + 8 <= 129). Then we advance the pointer (data), so we are now only allowed to read 4 bytes (the real length of the fragment). Later, the newly added check also passes (8 <= 12). So again, we have a 4-byte over-read here (assert(data_length >= fragment_length + sizeof(dtls_handshake_header_t)) fails again). The root cause of such bugs in TinyDTLS is that the real length of the received data (data_length in this case) is not used as the basis for handling the data; instead, the data is manipulated based on the fields in the handshake header.
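A sketch of the validation this suggests, using the real payload length as the source of truth (stand-in names and a simplified fixed header size; not the actual patch):

#include <stddef.h>

#define HANDSHAKE_HEADER_SIZE 12   /* simplified stand-in */

/* Sketch only: reject any fragment whose header fields claim more bytes
 * than were actually received. With f2 (declared message length 129,
 * fragment offset 13, declared fragment length 8, but only 4 real
 * payload bytes), the first test below fails. */
static int fragment_fields_sane(size_t data_length,   /* real bytes received */
                                size_t msg_length,    /* declared: 129 */
                                size_t frag_offset,   /* 13 */
                                size_t frag_length) { /* declared: 8 */
  size_t real_fragment_bytes;

  if (data_length < HANDSHAKE_HEADER_SIZE)
    return 0;                                   /* not even a full header */
  real_fragment_bytes = data_length - HANDSHAKE_HEADER_SIZE;     /* 4 */

  if (frag_length > real_fragment_bytes)        /* 8 > 4: caught */
    return 0;
  if (frag_offset + frag_length > msg_length)   /* must fit the message */
    return 0;
  return 1;
}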
I have attached the means to reproduce the bug. After downloading the suite, execute the script setupserver.sh in tinydtls-fixes_54/tests to compile TinyDTLS. Note that I have de-randomized TinyDTLS for this demonstration. Successful execution of setupserver.sh will run dtls-server on port 20220. Executing ./reproduce.sh in tinydtls-fixes_54/tests should then cause the assertion to fail.
tinydtls-fixes_54.zip

Best,
Hooman

@mrdeep1 (Contributor) commented Mar 26, 2021

The #62 update (which includes the #63 fix) handles this. Note that, per RFC 6347 Section 4.2.3, your f2 fragment is not a follow-on to f1, as the handshake length is different. Likewise, the Handshake Type and Message Sequence cannot change between fragments.
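For reference, the RFC 6347 Section 4.2.3 rule can be stated as a small sketch (hypothetical struct and names): fragments belong to the same handshake message only if type, message sequence, and total length all agree.

#include <stdint.h>

/* Sketch only: f2 above fails this test because its declared length (129)
 * differs from f1's. */
typedef struct {
  uint8_t  msg_type;      /* Handshake Type */
  uint16_t message_seq;   /* Message Sequence */
  uint32_t length;        /* total handshake message length */
} frag_id_t;

static int same_handshake_message(const frag_id_t *a, const frag_id_t *b) {
  return a->msg_type == b->msg_type &&
         a->message_seq == b->message_seq &&
         a->length == b->length;
}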

@bathooman (Author)

Dear @mrdeep1,

PR #62 resolves the memory error that occurs when reassembling the fragments. It is also very nice in that it fixes the non-conformance bugs regarding Message Sequence and Message Length equality between fragments. (I will start reporting non-conformance bugs once we are finished with the memory errors.) However, it still does not solve the one-byte memory over-read I reported in #55. Suppose we have a malformed fragmented Server Hello like the following:

[image: malformed fragmented Server Hello]

Note that the length field in the Handshake Protocol is equal to the Fragment Length, which indicates that there is no fragmentation. However, the Server Hello is in fact fragmented. Therefore, we still have the over-read here.

I suggest that TinyDTLS always use the real length of the received data instead of the values contained in the header fields.
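Applied to the case above, that suggestion amounts to one more comparison against the real length (sketch with hypothetical names):

#include <stddef.h>

/* Sketch only: even when the length field equals the Fragment Length
 * (i.e. the header claims the message is unfragmented), the declared
 * length must still fit within the bytes actually received. */
static int declared_length_fits(size_t declared_length, size_t header_size,
                                size_t data_length) {
  return declared_length + header_size <= data_length;
}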

Best,
Hooman

@mrdeep1 (Contributor) commented Mar 29, 2021

@bathooman It is good that you are doing all this code checking - thanks.

There are two distinct branches of code here: feature/handshake_fragmentation and develop (master will get updated at some point, so it is not relevant here). I am currently fixing fragmentation-specific issues in feature/handshake_fragmentation and any other issues in develop, and I do not really want to back-port fixes into feature/handshake_fragmentation. At some point, feature/handshake_fragmentation will be merged into develop (I do not have that authority).

I believe the one-byte over-read is fixed by #61 in develop.

@bathooman (Author)

Yes. Once feature/handshake_fragmentation is merged into develop, the bug will be resolved, as you have fixed it in #61.

Best,
Hooman
