Future of the WebBrowser Control
General information
Forum: Visual FoxPro
Category: ActiveX controls in VFP (Miscellaneous)
Date: 26/05/2021 06:25:05
Thread ID: 01680690
Message ID: 01680736
Views: 47
>Because it doesn't give you integration into your app. It's a separate window that you can't easily interact with - you can't easily handle events because it's not an ActiveX control, etc. There are lots of reasons why you want an **embedded** control in a form. Internet Explorer relies on those same components plus some additional ones to manage the host shell.
>
>And most likely that COM object will cease to work, because that actually launches Internet Explorer which will be removed from Windows by default.
>

>>My last DOM tweakings were not made from external objects calling into Trident interfaces, but from JavaScript functions / objects injected into the DOM (calling those interfaces if no direct JavaScript call was available) - that worked in Trident, and I'm pretty certain it will work in Chromium if then delegating to W3C DOM interfaces. With a bit of wrapping, a common interface for WebBrowser/MSHTML, WebView2 (Chromium) and WebDriver (Selenium) should be possible - at least until I find something smarter ;-)
>
>The problem is that the new Chromium controls don't have ActiveX interfaces. And for those that do (I think CEF Sharp does), it's still a bear to integrate, plus you have to ship the huge browser runtimes (separate from whatever browser is installed).
>
>Unless you have application integration, none of this helps you if you need to access the data from your host application. For example, look at something like Help Builder, which uses a FoxPro application to automate a JavaScript based editor, or generates output and then renders it in a browser using scripted layout templates. There are probably 100 or so function calls that go from FoxPro into the DOM: retrieving and setting the document, setting focus, grabbing selections, setting content in selections, inserting links etc. While it's possible to build an entire UI in JavaScript and keep it all inside of the browser, you still need something to push the data out to the host so it can write it to disk or whatever.
>
>If you're building a JavaScript application that's all or mostly JavaScript, I question why you'd be using FoxPro in the first place. If it's self-contained in the browser, then it's probably better served as a Web application altogether, with no need for FoxPro (or FoxPro only as a service backend). But that's a completely different use case than building a hybrid desktop Web application, where the browser is integrated **as part of the desktop application** - something like Help Builder, Markdown Monster or even WebSurge, which all use this hybrid application model.
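To make that "embedded control" point concrete for Fox readers: hosting the WebBrowser ActiveX in a form and calling into its DOM from the host looks roughly like this. A minimal sketch with placeholder names, not Help Builder's actual code:

* Minimal sketch: host the classic WebBrowser ActiveX in a VFP form
* and call into the Trident DOM from the host side.
PUBLIC goForm
goForm = CREATEOBJECT("BrowserForm")
goForm.Show()

DEFINE CLASS BrowserForm AS Form
    Width = 800
    Height = 600

    * Shell.Explorer.2 is the classic WebBrowser control
    ADD OBJECT oWeb AS OleControl WITH ;
        OleClass = "Shell.Explorer.2", ;
        Width = 800, Height = 600

    PROCEDURE Init
        THIS.oWeb.Navigate("about:blank")
        DO WHILE THIS.oWeb.ReadyState < 4   && READYSTATE_COMPLETE = 4
            DOEVENTS
        ENDDO
    ENDPROC

    * one of the many host-to-DOM calls: replace the document body
    PROCEDURE SetBody(tcHtml)
        THIS.oWeb.Document.body.innerHTML = tcHtml
    ENDPROC
ENDDEFINE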

The things I started building in the nineties were roboting WebBrowser / IE, doing the kind of screen scraping others now use BeautifulSoup or newer tools for when analyzing served HTML client side. They reused already existing, half-smart classification algorithms working off .dbf (late 90ies; started as Pascal/Modula-2 rec files, then moved to DB3 .dbf in the 80ies, then Fox index files - a "Data Esperanto" surviving several backend C/S vendor switches...) and compared that against the data received as file dump. Spot-check verification showed that some data delivered as a dump would be changed by a few soon afterwards. Running several IE processes was oodles more stable than the version having several pages with WebBrowser, and the synch code was already built for a single VFP pool broker...
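A stripped-down sketch of that roboting pattern - the table (report.dbf with rawvalue/changed columns) and the URL are made up:

LOCAL loIE, lcHtml
loIE = CREATEOBJECT("InternetExplorer.Application")
loIE.Visible = .F.
loIE.Navigate("https://example.com/report")   && placeholder URL
DO WHILE loIE.Busy OR loIE.ReadyState < 4   && wait for READYSTATE_COMPLETE
    DOEVENTS
ENDDO
lcHtml = loIE.Document.body.innerHTML
loIE.Quit()

* spot-check the dumped data against the live page
USE report SHARED
SCAN
    IF NOT (ALLTRIM(report.rawvalue) $ lcHtml)
        REPLACE report.changed WITH .T.
    ENDIF
ENDSCAN
USE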

The "Data Esperanto" aspect of vfp also helping when connecting some scraping effort to state level backends - some used SQL backends from all major players, some opened up only SOAP, bulk data loading formats or RPC interfaces into big iron. Vfp as interpreted data broker still a choice from dev time POV.

>I'm working through upgrading Markdown Monster from the Web Browser control to the new WebView at the moment, and there are a lot more quirks with the WebView than with the Web Browser control. Performance is considerably worse - it's noticeable in the editor. So while the standalone browsers might be much faster and more efficient, in an integration in Windows that might all get lost due to the integration interfaces and event forwarding across process boundaries.

>The Web Browser control is actually the easiest and most portable way to get Web content into a form. It's dated, but it works with original HTML5, which is quite capable. If you control your own content, it's easy to build content that works well with it and looks great. But it is obviously discontinued and stuck at the IE11 rendering level at most, which doesn't get the latest HTML/JS features. Things like ES6+, CSS3.5 and various new DOM APIs are not available in the WebBrowser control and never will be. But if you control the content, it's quite workable to build pages that work with IE11 using reasonably recent Web componentry.
>
>The WB control is also stable and faster than any of the Chromium controls I've worked with (both in Fox and .NET) because it's natively integrated, where the Chromium controls require internally running services for interprocess communication. It depends on what you need to do - if you control the content, you can control the HTML5/CSS/JavaScript used, and as long as one stays away from ES6 and the latest CSS features, IE does a pretty decent job of rendering content. Sure, it would be nice to use Chromium and ditch ES5 at minimum, but for integration there's no easy solution to that in FoxPro. To see what I mean, check out Cristof's CEF session at the previous Virtual FoxFest - it looks terribly finicky to get going. I've used CEF in .NET, and even there it was difficult to keep it running reliably. I can only imagine what the extra layer of ActiveX does to stability.

Back then internet speed was laughable and CPUs were not overwhelmed even when marshalling often, as the bottleneck was internet speed. Not any more: "chunky, not chatty" becomes relevant - another reason to do more steps without marshalling in between on the JavaScript side. Probably easier with my tasks than with yours.
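The chunky variant in practice: inject one collector function and marshal a single JSON string back, instead of one COM round trip per value. A sketch reusing the loIE instance from above - the selector is hypothetical, and JSON.stringify needs IE8+ in standards mode:

* chatty would be one cross-process call per value, e.g.
*   lcName = loIE.Document.getElementById("name").innerText
* chunky: one injected function, one round trip
LOCAL lcScript, lcJson
TEXT TO lcScript NOSHOW
function collectAll() {
    var rows = document.querySelectorAll("tr.item");
    var data = [];
    for (var i = 0; i < rows.length; i++) {
        data.push({ name:  rows[i].cells[0].innerText,
                    price: rows[i].cells[1].innerText });
    }
    return JSON.stringify(data);
}
ENDTEXT
loIE.Document.parentWindow.execScript(lcScript, "JavaScript")
lcJson = loIE.Document.Script.collectAll()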

That is MY downside: no control over the content, scrape the latest version... So I already looked at his session plus the WebView2 and WebDriver interfaces, so I can frighten stakeholders with the possible ways to code their scraping tasks if they decide to ditch Trident - probably still easier than going the "headless" BeautifulSoup or similar route, as the content changes often.
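For anyone wanting to look as well: WebDriver is just HTTP plus JSON under the hood, so it can be driven from VFP without any ActiveX layer at all. A rough sketch against a locally running msedgedriver (default port 9515; the JSON handling is deliberately crude and the URL a placeholder):

LOCAL loHttp, lcSession
loHttp = CREATEOBJECT("MSXML2.ServerXMLHTTP.6.0")

* create a browser session
loHttp.Open("POST", "http://localhost:9515/session", .F.)
loHttp.Send('{"capabilities":{"alwaysMatch":{"browserName":"MSEdge"}}}')
lcSession = STREXTRACT(loHttp.responseText, '"sessionId":"', '"')

* navigate, then pull the rendered page source
loHttp.Open("POST", "http://localhost:9515/session/" + lcSession + "/url", .F.)
loHttp.Send('{"url":"https://example.com"}')
loHttp.Open("GET", "http://localhost:9515/session/" + lcSession + "/source", .F.)
loHttp.Send()
? LEFT(loHttp.responseText, 200)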

regards
thomas