Joined: May 31, 2011 04:17 AM
Last Post: Jun 22, 2020 04:31 PM
Last Visit: Jul 1, 2020 01:28 PM
jmv has contributed to 19 posts out of 21251 total posts
(0.09%) in 4,879 days (0.00 posts per day).
20 Most recent posts:
Hi Gary,
just a couple of lines to let you know that, by optimizing our code (mostly trying to avoid race conditions and time-costly locks), I've been able to keep the L2 disconnections to a minimum (perhaps two or three times a day, at the busiest moments of the day).
So the problem is _nearly_ solved. However, there's one thing I forgot to mention in my initial problem description: for years, I've been running iqconnect.exe via wine from Linux, which I am quite sure carries some (if not quite a) performance penalty.
So, after all, I'm quite sure right now that those two or three "spikes" would be handled correctly by iqconnect.exe, were it run natively on a Windows machine. Within the next few days, I will set up a Windows Server 2016 instance to check whether this is the case.
Other than that, we're pretty satisfied with our performance right now.
thanks for your time and support!
Thanks for your kind offer,
as of yesterday's experiments, I've found out that tinkering with granularity (i.e., thread locking frequency) helps yield better results. Yesterday, I got only a single, final L2 disconnection event, at the market close (16:00 EST), which was to be expected, since it's the busiest moment of the day (along with the stock market opening at 09:30 EST, that is). This is much better than previous days, when I was getting several L2 disconnections *per hour*.
This tells me that the problem is at my end (most probably, excessive thread locking is still my problem). I think I can fix this by toying a bit more with my system's granularity. Should this not be the case, I will send you the source code of my stripped-down, as-simple-as-possible barebones L2 test receiver, so that you can check it out for yourself.
thanks for your time and great support,
The language I'm using is Java (OpenJDK 14), and I'm asking for seven symbols: ES, NQ, YM, 6E, 6J, 6B, BTC.
I'm just toying a bit with the system's granularity right now... I'm checking the receiver buffer a bit less often (every 25 ms instead of the previous 5 ms). I'll let you know if this eases the problem.
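In case it clarifies what I mean by "granularity": the idea is that the reader thread enqueues incoming lines lock-free, and a single consumer drains the whole queue on a fixed tick instead of taking a lock per message. A minimal Java sketch (all names here are illustrative, not my actual code):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal sketch of the coarser-granularity idea: the socket-reader thread
// enqueues L2 lines lock-free, and a single consumer drains the queue on a
// 25 ms tick instead of locking per message. Names are illustrative.
public class BatchedDrain {
    private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

    // Called by the socket-reader thread for every incoming L2 line.
    public void offer(String l2Line) {
        queue.add(l2Line);
    }

    // Called every 25 ms by the consumer; returns how many lines it handled.
    public int drainOnce() {
        int n = 0;
        while (queue.poll() != null) {
            n++;                       // process the line here
        }
        return n;
    }
}
```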
However, the real test will come when the markets close (i.e., 16:00 EST), since this is one of the busiest events of the day from an L2 perspective. So in three and a half hours we will be able to see whether we hit this problem again.
For me, it's highly suspicious that the problem only manifests itself in the busiest market moments (i.e., when the L2 message flow is at its peak). Either my receiving software(*) is the bottleneck, or my latency is causing the problem, I'm afraid.
(*) note: as for my software, I've also tried raising the socket buffer size, just to be sure that I wasn't discarding L2 messages due to a network buffer overrun... but that doesn't seem to be the problem, either.
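For reference, raising the receive buffer in Java looks roughly like this; the 4 MiB figure is illustrative (the OS may clamp it), and the endpoint is just a placeholder for the local IQConnect L2 port:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch of raising the TCP receive buffer before connecting to the local
// IQConnect L2 port. The 4 MiB size is illustrative; the OS may clamp it.
public class BigRecvBuffer {
    public static Socket openL2Socket(String host, int port) throws IOException {
        Socket s = new Socket();
        // Set before connect() so TCP window scaling can take effect.
        s.setReceiveBufferSize(4 * 1024 * 1024);
        s.connect(new InetSocketAddress(host, port));
        return s;
    }
}
```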
Hi there,
recently, I've spent some time modifying my trading application in order to keep a full real-time visual snapshot or "view" of the DOM landscape. While the visual result is very fancy and all of that (d'oh!), in the process of toying around with IQFeed's L2 stream I've come across a problem which I'd like to share with you.
Thing is, L2 can at times be a really busy stream, even more so if you happen to pick a busy symbol (NQ, ES, YM...). I've recorded incoming L2 message rates well in excess of 1500 L2 messages per second coming from IQFeed during busy market hours (stock market open at 09:30 EST, close at 16:00 EST, etc.). Normal, "quiet" periods tend to average 200-500 messages per second.
The problem I'm experiencing is that, the moment the L2 message rate reaches around the 1000 mps (messages per second) mark, my IQClient app starts losing L2 messages, and it will disconnect / reconnect to your L2 servers. Here's an example from the logs:
------------
STATUS Information 36 0 2020-06-16 16:21:38 No data from Level 2 server in 7s. Reconnecting.
STATUS Information 36 0 2020-06-16 16:21:38 Attempting to reconnect to Level 2 server.
STATUS Information 36 0 2020-06-16 16:21:43 Connected to Level 2 server.
STATUS Information 36 0 2020-06-16 16:40:03 No data from Level 2 server in 43s. Reconnecting.
STATUS Information 36 0 2020-06-16 16:40:03 Attempting to reconnect to Level 2 server.
STATUS Information 36 0 2020-06-16 16:41:09 Connected to Level 2 server.
STATUS Information 36 0 2020-06-16 16:41:48 No data from Level 2 server in 39s. Reconnecting.
STATUS Information 36 0 2020-06-16 16:41:48 Attempting to reconnect to Level 2 server.
STATUS Information 36 0 2020-06-16 16:42:11 Connected to Level 2 server.
----------
Sometimes it will reconnect in mere seconds (3-4 seconds), while other times one or two minutes can pass until it reconnects to the L2 server.
I know that if my end cannot keep up with IQFeed's server, your host will likely drop my connection, hence these disconnections could be due to my system not being able to cope with one of these "L2 storms" that happen during busy market hours.
The problem I see here is that I already tried making a very simple, stripped-down L2 client, just as a proof of concept of how many L2 messages per second I would be able to catch. Even with a simple read-and-discard strategy (no L2 message storage, no ASCII parsing, no dynamic structures, no thread locking, no synchronization locks, etc.), I still keep getting kicked out of your L2 servers during _really_ busy market events (opening, close, etc.).
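The read-and-discard client I mean is essentially the sketch below: watch one symbol on the local L2 port and count lines, keeping nothing. The symbol and port follow my earlier posts; everything else is illustrative, not my exact code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hedged sketch of the read-and-discard test client described above: watch
// one symbol on the local IQConnect L2 port and count lines without parsing.
public class DiscardClient {

    // Read lines until EOF, keeping nothing but a counter.
    public static long readAndDiscard(BufferedReader in) throws IOException {
        long count = 0;
        while (in.readLine() != null) {
            count++;                   // no storage, no field parsing
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        try (Socket s = new Socket("127.0.0.1", 9200);
             BufferedReader in = new BufferedReader(new InputStreamReader(
                     s.getInputStream(), StandardCharsets.US_ASCII), 1 << 20);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.print("wXG#\r\n");     // watch the front-month FDAX
            out.flush();
            System.out.println(readAndDiscard(in) + " L2 messages seen");
        }
    }
}
```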
So, if the software isn't blocking your stream flow... could it be just a latency problem? My servers have a not-so-nice 180-190 ms round-trip ping to IQFeed's servers. With a latency that high, am I facing a structural problem here, where no matter how fast my software is, I won't be able to keep up with a fast-moving L2 stream? (unless I opt for leaving Europe and colocating my servers within the USA, of course...)
thanks for your time,
I concur with one541,
Having been developing with Interactive Brokers' API for a few years (if you want some real developer action, go give it a try, you will see what I mean), nowadays I tend to appreciate the raw simplicity of IQFeed.
Granted, from a bandwidth standpoint, an ASCII-based protocol might not be the optimal solution, but in this day and age of 100+ Mbps domestic FTTH internet connections, this should be of no concern.
As for having to parse the incoming strings... well, even if IQFeed were to arrive in binary format, at the end of the day we would have to "parse" it too, i.e., re-adapt it to our own application classes/structures/etc. Be it ASCII or binary, some degree of parsing is unavoidable.
Just my two cents, YMMV !!
regards,
Hi,
iqconnect.exe does a ping round trip against its DTN servers (you can see the ping results in IQConnect.log when you stop the feed and iqconnect.exe exits). Thing is, ping uses the ICMP protocol, which on Linux is somewhat privileged.
So, you need to give wine the appropriate permissions to be able to use ICMP. Running wine as root to circumvent this problem would be overkill (besides being a very bad thing to do!), but fortunately you can use setcap to grant permissions in a much more granular way.
First, locate where your wine-preloader file is. In my case, it's at /usr/bin/wine-preloader. Then type (you will need sudo for this):
sudo setcap cap_net_raw+epi /usr/bin/wine-preloader
and that's all. Now wine is allowed to use the ICMP protocol, which in turn will allow iqconnect.exe to do its "ping things" without complaining xD
trade well !
Jose,
09/03 is again back on track as of now.
Thanks for a great customer support,
We have found further evidence that something seems to be wrong with the 03/09 day.
Since we usually run a daily HTD early in the morning, we've checked the HTD we ran yesterday (04/09). That HTD gave us the 03/09 day too, of course.
Much to our surprise, the 03/09 day within yesterday's HTD request is _very_ different from the 03/09 day returned by today's HTD request.
This is really strange since, both today and yesterday, 03/09 is supposed to be a "settled" day in DTN's tick database. This would mean that, somehow, DTN's database for day 03/09 has changed and/or been modified between yesterday and today.
Strange as it might sound, this is what we're seeing right now. We have been retrieving tick data from IQFeed, both via HTD and in real time, on a daily basis for some years, so we are pretty sure about these findings.
thanks for your help,
Hello,
I was wondering if anyone is getting "weird" (so to say) tick data for that day (03/09) for DAX futures (iqfeed symbol: XG#).
An HTD request, say, for the last 20 days of tick data, will return all the other days as "normal", but that one (03/09) seems strange.
For one, its size is DOUBLE the average size of the other days (that is, you could say that day shows twice the normal/average number of ticks for a standard DAX trading day).
Also, the volume data this file holds is inconsistent with what happened that day (I followed the session in real time, as we do every day, and I am pretty sure there was no such activity spike that day).
Just wondering if everything's ok with that day... retrieving it via HTD gives a really, really strange result.
thanks,
I, for one, will wait until the whole thing is fixed; my ATS relies heavily on the feed's consistency, and a hole-filled, gruyere cheese-style feed is perhaps not the best thing to feed my ATS with...
I'll definitely wait for the feed to get fixed, yes .
Hello,
right now the lag is approx. +4 minutes and growing. It gets worse as the day goes on, a kind of cumulative lag. So it seems the problem is still there.
Sadly, you're right.... right now I'm getting no less than 15 minutes of lag for FDAX@DTB.
Field 18 (timestamp) from DTN's feed keeps saying that the CST time is 03:08, when in reality it is 03:23...
This has been happening for at least three or four days now. I really do hope they'll fix this ASAP, since trading this way is next to impossible.
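The lag I'm quoting is just the feed timestamp subtracted from the wall clock. A quick Java sketch (the HH:mm format here is simplified from what the timestamp field actually carries):

```java
import java.time.Duration;
import java.time.LocalTime;

// Sketch of how I measure the lag: parse the feed's CST timestamp (field 18
// in my updates) and subtract it from the current CST wall-clock time.
// The HH:mm format is simplified for illustration.
public class FeedLag {
    public static Duration lag(String feedTimeCst, LocalTime nowCst) {
        return Duration.between(LocalTime.parse(feedTimeCst), nowCst);
    }
}
```

With the values above, `lag("03:08", LocalTime.parse("03:23"))` works out to the 15 minutes I'm seeing.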
Hi Steve,
thanks a lot; yes, I was a little confused by the first post as you say, but now it's all crystal clear.
I will contact the sales dept ASAP, then, and have them upgrade my account to RT Eurex L2. Weird as it might seem, I'd swear that I asked for both L1 and L2 realtime data when I first contacted my sales rep (since we use both levels in our daily operations), but since we signed up around six months ago, I cannot tell for sure now.
Anyway, thanks a lot for your prompt answers and help !
Thanks for your answer,
maybe this is not the right place to post this (perhaps I should email the sales dept?) but I think there's some kind of misunderstanding here. I have been working with, and signed up for, realtime Eurex data with IQFeed for months now (I don't remember the exact date I signed up with DTN IQFeed, but this has been working without problems for at least six months).
In all that time, I haven't modified my IQFeed subscription, nor have I been notified by IQFeed of any changes to it (or at least I cannot recall any right now). What's more, just 10 days ago I was receiving Level 2 data flawlessly.
Thing is, a few days ago (9 or 10 days, as said above), Level 2 suddenly stopped sending data. From your post, I understand that, after six months of being subscribed to and receiving realtime data, somehow _now_ I have been "unsubscribed" from IQFeed's realtime Eurex data service. How could such a change have happened?
This is a very serious issue for our business, really... could you please check when, why, and by whom we have been "silently unsubscribed" (so to say) from realtime data? Other than some kind of problem with our credit card payment, I cannot think of any reason for this sudden change to our subscription (and even if that were the case, I suppose we should have received some warning email, shouldn't we?)
thanks for your time,
Hello all,
We had been successfully retrieving Level-II data for futures for several months; but several days ago, all of a sudden, Level 2 data stopped arriving.
Watching our logs, we can see the following:
[connects to level 2 tcp port]
[DTN L2] S,SERVER CONNECTED
[DTN L2] C
[DTN L2] T,2012-01-10 08:35:35
[now we ask for DAX March'2012 symbol by sending "wXGH12\r\n"]
[DTN L2] n,XGH12,XGH12
Which, according to the docs, means "symbol was not found". Strangely, this had been working for the past months without a single problem.
Even stranger is the fact that Level 1 works as expected when we send the same request, "wXGH12\r\n". That is, we are receiving "symbol not found" on L2, but the same symbol works flawlessly when requesting Level 1 data.
Also, we have tried sending "wXG#", and we keep getting the same results: the L1 socket starts sending FDAX March '12 price data as expected, but L2 greets us again with an "n,XG#,XG#" message.
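In case it helps anyone reproduce this, the check we apply to the L2 replies is trivial; per the docs quoted above, a line starting with "n," is the symbol-not-found message:

```java
// Per the docs, an L2 reply starting with "n," means "symbol was not found";
// anything else we treat as a normal message. Trivial check:
public class L2Reply {
    public static boolean isSymbolNotFound(String line) {
        return line.startsWith("n,");
    }
}
```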
Is there any reason why we cannot get L2 data ?
thanks for your answer,
Thanks a lot for your prompt reply!
regards,
Hello all,
I'm getting some strange results from the HTD historical request command. Whenever I send, for example, "HTD,XGZ11,21" (give me the latest 21 days of tick data for FDAX), I get tick data for only one week, from Monday 07/11 (today) back to Monday 31/10 (inclusive), despite the fact that I asked for three weeks.
Am I doing something wrong here?
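To quantify this, I simply count the distinct dates in the returned tick lines. A sketch of that check, assuming each line starts with a "YYYY-MM-DD HH:MM:SS" timestamp (which is what my responses look like; the sample prices are made up):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Sketch of counting how many distinct days an HTD response actually covers,
// assuming every tick line starts with "YYYY-MM-DD HH:MM:SS".
public class HtdDays {
    public static Set<String> distinctDates(List<String> tickLines) {
        Set<String> days = new TreeSet<>();
        for (String line : tickLines) {
            days.add(line.substring(0, 10));   // date part of the timestamp
        }
        return days;
    }
}
```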
thanks a lot for your time,
After several attempts, I finally got a "cannot connect" Java IO exception on the L2 port (9200). Which led me to think that perhaps I was not closing the client sockets cleanly and, somehow, after a few reconnections, port 9200 got "stuck" (i.e., permanently bound to an old process, and thus unable to be bound by new software instances).
I found a couple of cases where the port closure wasn't clean. I've fixed them, so let's see whether that works. For now, today I've got my L2 data back.
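The fix boiled down to making the close deterministic. In Java, try-with-resources guarantees the socket is closed even when the read loop throws; a sketch (host/port here are placeholders for the local IQConnect L2 endpoint):

```java
import java.io.IOException;
import java.net.Socket;

// Sketch of the fix: try-with-resources guarantees close() runs even if the
// read loop throws, so the client never leaves its L2 connection half-open.
public class CleanClose {
    public static boolean session(String host, int port) throws IOException {
        try (Socket s = new Socket(host, port)) {
            // ... read L2 data until done or an exception is thrown ...
            return s.isConnected();
        }   // s.close() always runs here
    }
}
```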
thanks
Hello all,
I'm working with IQFeed's API and everything works great. However, there seem to be some special occasions where L2 data is just not available. I mean, everything goes OK, the API gets connected, and then I receive the normal startup messages:
--------------------------------
[DTN] IQConnectStatus [0,0]
[DTN L1] S,KEY,9796
[DTN L1] S,SERVER CONNECTED
[DTN L1] S,IP,66.112.148.180 60004,66.112.148.211 60002,66.112.148.212 60009,66.112.148.213 60001,66.112.148.211 60005,66.112.148.211 60012,66.112.148.200 60003,66.112.148.214 60015,66.112.148.212 60050,66.112.148.213 60014,66.112.148.212 60016,66.112.148.212 60018
[DTN L1] S,CUST,real_time,66.112.148.112,60003,xxxxxxxxxxxxx,4.7.0.9,0, L2_SERV EUREX L2 ,,500,QT_API,,
---------------------------------
Now I send "wXGM11\r\n" to port 5200, and the contract's L1 data starts flowing. No problem here. But then, when I send "wXGM11\r\n" to 9200 (the L2 port), it will "usually" work OK. I say "usually" because at least twice (today being the second time, by the way) the IQFeed server has refused to send the L2 data.
No matter whether I disconnect/reconnect IQLink, or restart my software's connection a number of times, the L2 data just won't arrive.
Admittedly, this has only happened twice... but it is nonetheless a worrying situation, because you can go a whole day without L2 data before the IQ servers "decide" to send you L2 data again (last time, I had no L2 data until the following day; today, I do not know how long it will last).
has someone experienced this problem?
thanks for your time,